[ { "msg_contents": "If the subtransaction cache is overflowed in some of the transactions\nthen it will affect all the concurrent queries as they need to access\nthe SLRU for checking the visibility of each tuple. But currently\nthere is no way to identify whether any backend's subtransaction cache is\noverflowed or what is the current active subtransaction count.\nAttached patch adds subtransaction count and subtransaction overflow\nstatus in pg_stat_activity. I have implemented this because of the\nrecent complaint about the same[1]\n\n[1] https://www.postgresql.org/message-id/CAFiTN-t5BkwdHm1bV8ez64guWZJB_Jjhb7arsQsftxEwpYwObg%40mail.gmail.com\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 7 Dec 2021 09:46:25 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Mon, Dec 6, 2021 at 8:17 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> If the subtransaction cache is overflowed in some of the transactions\n> then it will affect all the concurrent queries as they need to access\n> the SLRU for checking the visibility of each tuple. But currently\n> there is no way to identify whether in any backend subtransaction is\n> overflowed or what is the current active subtransaction count.\n> Attached patch adds subtransaction count and subtransaction overflow\n> status in pg_stat_activity. I have implemented this because of the\n> recent complain about the same[1]\n>\n> [1]\n> https://www.postgresql.org/message-id/CAFiTN-t5BkwdHm1bV8ez64guWZJB_Jjhb7arsQsftxEwpYwObg%40mail.gmail.com\n>\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n\n\nHi,\n\nbq. 
there is a no way to\n\nExtra 'a' before no.\n\n+ * Number of active subtransaction in the current session.\n\nsubtransaction -> subtransactions\n\n+ * Whether subxid count overflowed in the current session.\n\nIt seems 'count' can be dropped from the sentence.\n\nCheers", "msg_date": "Mon, 6 Dec 2021 20:30:29 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Mon, Dec 6, 2021 at 8:16 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> If the subtransaction cache is overflowed in some of the transactions\n> then it will affect all the concurrent queries as they need to access\n> the SLRU for checking the visibility of each tuple. 
But currently\n> there is no way to identify whether in any backend subtransaction is\n> overflowed or what is the current active subtransaction count.\n\n\nI think it's a good idea – had the same need when recently researching\nvarious issues with subtransactions [1], needed to patch Postgres in\nbenchmarking environments. To be fair, there is a way to understand that\nthe overflowed state is reached for PG 13+ – on standbys, observe reads in\nSubtrans in pg_stat_slru. But of course, it's an indirect way.\n\nI see that the patch adds two new columns to pg_stat_activity:\nsubxact_count and subxact_overflowed. This should be helpful to have.\nAdditionally, exposing the lastOverflowedXid value would be also good for\ntroubleshooting of subtransaction edge and corner cases – a bug recently\nfixed in all current versions [2] was really tricky to troubleshoot in\nproduction because this value is not visible to DBAs.\n\n[1]\nhttps://postgres.ai/blog/20210831-postgresql-subtransactions-considered-harmful\n[2] https://commitfest.postgresql.org/36/3399/", "msg_date": "Mon, 6 Dec 2021 20:59:12 -0800", "msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>subxact_count</structfield> <type>xid</type>\n> + </para>\n> + <para>\n> + The current backend's active subtransactions count.\n\nsubtransaction (no s)\n\n> + Set to true if current backend's subtransaction cache is overflowed.\n\nSay \"has overflowed\"\n\n> +\t\tif (local_beentry->subxact_count > 0)\n> +\t\t{\n> +\t\t\tvalues[30] = local_beentry->subxact_count;\n> +\t\t\tvalues[31] = local_beentry->subxact_overflowed;\n> +\t\t}\n> +\t\telse\n> +\t\t{\n> +\t\t\tnulls[30] = true;\n> +\t\t\tnulls[31] = true;\n> +\t\t}\n\nWhy is the subxact count set to NULL instead of zero ?\n\nYou added this to pg_stat_activity, which already has a lot of fields.\nWe talked a few months ago about not adding more fields that weren't commonly\nused.\nhttps://www.postgresql.org/message-id/flat/20210426191811.sp3o77doinphyjhu%40alap3.anarazel.de#d96d0a116f0344301eead2676ea65b2e\n\nSince I think this field is usually not interesting to most users of\npg_stat_activity, maybe this should instead be implemented as a function like\npg_backend_get_subxact_status(pid).\n\nPeople who want to could use it like:\nSELECT * FROM pg_stat_activity psa, pg_backend_get_subxact_status(pid) sub;\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 6 Dec 2021 23:41:24 -0600", "msg_from": "Justin Pryzby 
<pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Tue, Dec 7, 2021 at 11:11 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\nThanks for the review I will work on these comments.\n\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>subxact_count</structfield> <type>xid</type>\n> > + </para>\n> > + <para>\n> > + The current backend's active subtransactions count.\n>\n> subtransaction (no s)\n>\n> > + Set to true if current backend's subtransaction cache is overflowed.\n>\n> Say \"has overflowed\"\n>\n> > + if (local_beentry->subxact_count > 0)\n> > + {\n> > + values[30] = local_beentry->subxact_count;\n> > + values[31] = local_beentry->subxact_overflowed;\n> > + }\n> > + else\n> > + {\n> > + nulls[30] = true;\n> > + nulls[31] = true;\n> > + }\n>\n> Why is the subxact count set to NULL instead of zero ?\n\n> You added this to pg_stat_activity, which already has a lot of fields.\n> We talked a few months ago about not adding more fields that weren't commonly\n> used.\n> https://www.postgresql.org/message-id/flat/20210426191811.sp3o77doinphyjhu%40alap3.anarazel.de#d96d0a116f0344301eead2676ea65b2e\n>\n> Since I think this field is usually not interesting to most users of\n> pg_stat_activity, maybe this should instead be implemented as a function like\n> pg_backend_get_subxact_status(pid).\n>\n> People who want to could use it like:\n> SELECT * FROM pg_stat_activity psa, pg_backend_get_subxact_status(pid) sub;\n\nYeah, this is a valid point, I will change this accordingly.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Dec 2021 15:37:42 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Tue, Dec 7, 2021 at 10:29 AM Nikolay 
Samokhvalov\n<samokhvalov@gmail.com> wrote:\n>\n> On Mon, Dec 6, 2021 at 8:16 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>\n>> If the subtransaction cache is overflowed in some of the transactions\n>> then it will affect all the concurrent queries as they need to access\n>> the SLRU for checking the visibility of each tuple. But currently\n>> there is no way to identify whether in any backend subtransaction is\n>> overflowed or what is the current active subtransaction count.\n>\n>\n> I think it's a good idea – had the same need when recently researching various issues with subtransactions [1], needed to patch Postgres in benchmarking environments. To be fair, there is a way to understand that the overflowed state is reached for PG 13+ – on standbys, observe reads in Subtrans in pg_stat_slru. But of course, it's an indirect way.\n\nYeah right.\n\n> I see that the patch adds two new columns to pg_stat_activity: subxact_count and subxact_overflowed. This should be helpful to have. Additionally, exposing the lastOverflowedXid value would be also good for troubleshooting of subtransaction edge and corner cases – a bug recently fixed in all current versions [2] was really tricky to troubleshoot in production because this value is not visible to DBAs.\n\nYeah, we can show this too, although we need to take ProcArrayLock in\nthe shared mode for reading this, but anyway that will be done on\nusers request so should not be an issue IMHO.\n\nI will post the updated patch soon along with comments given by\nZhihong Yu and Justin.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Dec 2021 15:40:58 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On 12/6/21, 8:19 PM, \"Dilip Kumar\" <dilipbalaut@gmail.com> wrote:\r\n> If the subtransaction cache is overflowed in some of the transactions\r\n> then 
it will affect all the concurrent queries as they need to access\r\n> the SLRU for checking the visibility of each tuple. But currently\r\n> there is no way to identify whether in any backend subtransaction is\r\n> overflowed or what is the current active subtransaction count.\r\n> Attached patch adds subtransaction count and subtransaction overflow\r\n> status in pg_stat_activity. I have implemented this because of the\r\n> recent complain about the same[1]\r\n\r\nI'd like to give a general +1 to this effort. Thanks for doing this!\r\nI've actually created a function to provide this information in the\r\npast, so I will help review.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 7 Dec 2021 17:24:36 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "I also want to +1 this effort. Exposing subtransaction usage is very useful.\r\n\r\nIt would also be extremely beneficial to add both subtransaction usage and overflow counters to pg_stat_database. \r\n\r\nMonitoring tools that capture deltas on pg_stat_database will be able to generate historical analysis and usage trends of subtransactions.\r\n\r\nOn 12/7/21, 5:34 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n\r\n    On 12/6/21, 8:19 PM, \"Dilip Kumar\" <dilipbalaut@gmail.com> wrote:\r\n    > If the subtransaction cache is overflowed in some of the transactions\r\n    > then it will affect all the concurrent queries as they need to access\r\n    > the SLRU for checking the visibility of each tuple. But currently\r\n    > there is no way to identify whether in any backend subtransaction is\r\n    > overflowed or what is the current active subtransaction count.\r\n    > Attached patch adds subtransaction count and subtransaction overflow\r\n    > status in pg_stat_activity. 
I have implemented this because of the\r\n > recent complain about the same[1]\r\n\r\n I'd like to give a general +1 to this effort. Thanks for doing this!\r\n I've actually created a function to provide this information in the\r\n past, so I will help review.\r\n\r\n Nathan\r\n\r\n\r\n", "msg_date": "Tue, 7 Dec 2021 23:45:51 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Tue, Dec 7, 2021 at 11:11 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n>\n> You added this to pg_stat_activity, which already has a lot of fields.\n> We talked a few months ago about not adding more fields that weren't commonly\n> used.\n> https://www.postgresql.org/message-id/flat/20210426191811.sp3o77doinphyjhu%40alap3.anarazel.de#d96d0a116f0344301eead2676ea65b2e\n>\n> Since I think this field is usually not interesting to most users of\n> pg_stat_activity, maybe this should instead be implemented as a function like\n> pg_backend_get_subxact_status(pid).\n>\n> People who want to could use it like:\n> SELECT * FROM pg_stat_activity psa, pg_backend_get_subxact_status(pid) sub;\n\nI have provided two function, one for subtransaction counts and other\nwhether subtransaction cache is overflowed or not, we can use like\nthis, if we think this is better way to do it then we can also add\nanother function for the lastOverflowedXid\n\npostgres[43994]=# select id, pg_stat_get_backend_pid(id) as pid,\npg_stat_get_backend_subxact_count(id) as nsubxact,\npg_stat_get_backend_subxact_overflow(id) as overflowed from\npg_stat_get_backend_idset() as id;\n id | pid | nsubxact | overflowed\n----+-------+----------+------------\n 1 | 43806 | 0 | f\n 2 | 43983 | 64 | t\n 3 | 43994 | 0 | f\n 4 | 44323 | 22 | f\n 5 | 43802 | 0 | f\n 6 | 43801 | 0 | f\n 7 | 43804 | 0 | f\n(7 rows)\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 
13 Dec 2021 19:58:26 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On 12/13/21, 6:30 AM, \"Dilip Kumar\" <dilipbalaut@gmail.com> wrote:\r\n> On Tue, Dec 7, 2021 at 11:11 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\r\n>> Since I think this field is usually not interesting to most users of\r\n>> pg_stat_activity, maybe this should instead be implemented as a function like\r\n>> pg_backend_get_subxact_status(pid).\r\n>>\r\n>> People who want to could use it like:\r\n>> SELECT * FROM pg_stat_activity psa, pg_backend_get_subxact_status(pid) sub;\r\n>\r\n> I have provided two function, one for subtransaction counts and other\r\n> whether subtransaction cache is overflowed or not, we can use like\r\n> this, if we think this is better way to do it then we can also add\r\n> another function for the lastOverflowedXid\r\n\r\nThe general approach looks good to me. I think we could have just one\r\nfunction for all three values, though.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 13 Dec 2021 22:27:09 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "Hi,\n\nI have looked into the v2 patch and here are my comments:\n\n+ PG_RETURN_INT32(local_beentry->subxact_overflowed);\n+}\n\nShould this be PG_RETURN_BOOL instead of PG_RETURN_INT32??\n\n--\n\n+{ oid => '6107', descr => 'statistics: cached subtransaction count of\nbackend',\n+ proname => 'pg_stat_get_backend_subxact_count', provolatile => 's',\nproparallel => 'r',\n+ prorettype => 'int4', proargtypes => 'int4',\n+ prosrc => 'pg_stat_get_backend_subxact_count' },\n+{ oid => '6108', descr => 'statistics: subtransaction cache of backend\noverflowed',\n+ proname => 'pg_stat_get_backend_subxact_overflow', provolatile => 's',\nproparallel => 'r',\n+ prorettype => 
'bool', proargtypes => 'int4',\n+ prosrc => 'pg_stat_get_backend_subxact_overflow' },\n\nThe description says that the two new functions show the statistics for\n\"cached subtransaction count of backend\" and \"subtransaction cache of\nbackend overflowed\". But, when these functions are called it shows these\nstats for the non-backend processes like checkpointer, walwriter etc as\nwell. Should that happen?\n\n--\n\ntypedef struct LocalPgBackendStatus\n * not.\n */\n TransactionId backend_xmin;\n+\n+ /*\n+ * Number of cached subtransactions in the current session.\n+ */\n+ int subxact_count;\n+\n+ /*\n+ * The number of subtransactions in the current session exceeded the\ncached\n+ * subtransaction limit.\n+ */\n+ bool subxact_overflowed;\n\nAll the variables inside this LocalPgBackendStatus structure are prefixed\nwith \"backend\" word. Can we do the same for the newly added variables as\nwell?\n\n--\n\n+ * Get the xid and xmin, nsubxid and overflow status of the backend.\nThe\n\nShould this be put as - \"xid, xmin, nsubxid and overflow\" instead of \"xid\nand xmin, nsubxid and overflow\"?\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\nOn Mon, Dec 13, 2021 at 7:58 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Tue, Dec 7, 2021 at 11:11 AM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n>\n> >\n> > You added this to pg_stat_activity, which already has a lot of fields.\n> > We talked a few months ago about not adding more fields that weren't\n> commonly\n> > used.\n> >\n> https://www.postgresql.org/message-id/flat/20210426191811.sp3o77doinphyjhu%40alap3.anarazel.de#d96d0a116f0344301eead2676ea65b2e\n> >\n> > Since I think this field is usually not interesting to most users of\n> > pg_stat_activity, maybe this should instead be implemented as a function\n> like\n> > pg_backend_get_subxact_status(pid).\n> >\n> > People who want to could use it like:\n> > SELECT * FROM pg_stat_activity psa, pg_backend_get_subxact_status(pid)\n> sub;\n>\n> I have provided two function, 
one for subtransaction counts and other\n> whether subtransaction cache is overflowed or not, we can use like\n> this, if we think this is better way to do it then we can also add\n> another function for the lastOverflowedXid\n>\n> postgres[43994]=# select id, pg_stat_get_backend_pid(id) as pid,\n> pg_stat_get_backend_subxact_count(id) as nsubxact,\n> pg_stat_get_backend_subxact_overflow(id) as overflowed from\n> pg_stat_get_backend_idset() as id;\n> id |  pid  | nsubxact | overflowed\n> ----+-------+----------+------------\n>  1 | 43806 |        0 | f\n>  2 | 43983 |       64 | t\n>  3 | 43994 |        0 | f\n>  4 | 44323 |       22 | f\n>  5 | 43802 |        0 | f\n>  6 | 43801 |        0 | f\n>  7 | 43804 |        0 | f\n> (7 rows)\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>", "msg_date": "Tue, 14 Dec 2021 18:23:43 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Tue, Dec 14, 2021 at 3:57 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 12/13/21, 6:30 AM, \"Dilip Kumar\" <dilipbalaut@gmail.com> wrote:\n> > On Tue, Dec 7, 2021 at 11:11 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >> Since I think this field is usually not interesting to most users of\n> >> pg_stat_activity, maybe this should instead be implemented as a function like\n> >> pg_backend_get_subxact_status(pid).\n> >>\n> >> People who want to could use it like:\n> >> SELECT * FROM pg_stat_activity psa, pg_backend_get_subxact_status(pid) sub;\n> >\n> > I have provided two function, one for subtransaction counts and other\n> > whether subtransaction cache is overflowed or not, we can use like\n> > this, if we think this is better way to do it then we can also add\n> > another function for the lastOverflowedXid\n>\n> The general approach looks good to me. I think we could have just one\n> function for all three values, though.\n\nIf we create just one function then the output type will be a tuple\nthen we might have to add another view on top of that. 
Is there any\nbetter way to do that?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 Dec 2021 09:00:04 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Tue, Dec 14, 2021 at 6:23 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hi,\n>\n> I have looked into the v2 patch and here are my comments:\n>\n> + PG_RETURN_INT32(local_beentry->subxact_overflowed);\n> +}\n>\n> Should this be PG_RETURN_BOOL instead of PG_RETURN_INT32??\n>\n> --\n>\n> +{ oid => '6107', descr => 'statistics: cached subtransaction count of backend',\n> + proname => 'pg_stat_get_backend_subxact_count', provolatile => 's', proparallel => 'r',\n> + prorettype => 'int4', proargtypes => 'int4',\n> + prosrc => 'pg_stat_get_backend_subxact_count' },\n> +{ oid => '6108', descr => 'statistics: subtransaction cache of backend overflowed',\n> + proname => 'pg_stat_get_backend_subxact_overflow', provolatile => 's', proparallel => 'r',\n> + prorettype => 'bool', proargtypes => 'int4',\n> + prosrc => 'pg_stat_get_backend_subxact_overflow' },\n>\n> The description says that the two new functions show the statistics for \"cached subtransaction count of backend\" and \"subtransaction cache of backend overflowed\". But, when these functions are called it shows these stats for the non-backend processes like checkpointer, walwriter etc as well. Should that happen?\n>\n> --\n>\n> typedef struct LocalPgBackendStatus\n> * not.\n> */\n> TransactionId backend_xmin;\n> +\n> + /*\n> + * Number of cached subtransactions in the current session.\n> + */\n> + int subxact_count;\n> +\n> + /*\n> + * The number of subtransactions in the current session exceeded the cached\n> + * subtransaction limit.\n> + */\n> + bool subxact_overflowed;\n>\n> All the variables inside this LocalPgBackendStatus structure are prefixed with \"backend\" word. 
Can we do the same for the newly added variables as well?\n>\n> --\n>\n> + * Get the xid and xmin, nsubxid and overflow status of the backend. The\n>\n> Should this be put as - \"xid, xmin, nsubxid and overflow\" instead of \"xid and xmin, nsubxid and overflow\"?\n\nThanks, Ashutosh, I will work on your comments and post an updated\nversion by next week.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 Dec 2021 09:01:14 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Fri, Dec 17, 2021 at 09:00:04AM +0530, Dilip Kumar wrote:\n> On Tue, Dec 14, 2021 at 3:57 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n> >\n> > On 12/13/21, 6:30 AM, \"Dilip Kumar\" <dilipbalaut@gmail.com> wrote:\n> > > On Tue, Dec 7, 2021 at 11:11 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >> Since I think this field is usually not interesting to most users of\n> > >> pg_stat_activity, maybe this should instead be implemented as a function like\n> > >> pg_backend_get_subxact_status(pid).\n> > >>\n> > >> People who want to could use it like:\n> > >> SELECT * FROM pg_stat_activity psa, pg_backend_get_subxact_status(pid) sub;\n> > >\n> > > I have provided two function, one for subtransaction counts and other\n> > > whether subtransaction cache is overflowed or not, we can use like\n> > > this, if we think this is better way to do it then we can also add\n> > > another function for the lastOverflowedXid\n> >\n> > The general approach looks good to me. I think we could have just one\n> > function for all three values, though.\n> \n> If we create just one function then the output type will be a tuple\n> then we might have to add another view on top of that. 
Is there any\n> better way to do that?\n\nI don't think you'd need to add a view on top of it.\n\nCompare:\n\npostgres=# SELECT 1, pg_config() LIMIT 1;\n ?column? | pg_config \n----------+----------------------------\n 1 | (BINDIR,/usr/pgsql-14/bin)\n\npostgres=# SELECT 1, c FROM pg_config() c LIMIT 1;\n ?column? | c \n----------+----------------------------\n 1 | (BINDIR,/usr/pgsql-14/bin)\n\npostgres=# SELECT 1, c.* FROM pg_config() c LIMIT 1;\n ?column? | name | setting \n----------+--------+-------------------\n 1 | BINDIR | /usr/pgsql-14/bin\n\n\n", "msg_date": "Thu, 16 Dec 2021 22:02:32 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Fri, Dec 17, 2021 at 9:32 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Fri, Dec 17, 2021 at 09:00:04AM +0530, Dilip Kumar wrote:\n> > On Tue, Dec 14, 2021 at 3:57 AM Bossart, Nathan <bossartn@amazon.com>\n> wrote:\n> > >\n> > > On 12/13/21, 6:30 AM, \"Dilip Kumar\" <dilipbalaut@gmail.com> wrote:\n> > > > On Tue, Dec 7, 2021 at 11:11 AM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n> > > >> Since I think this field is usually not interesting to most users of\n> > > >> pg_stat_activity, maybe this should instead be implemented as a\n> function like\n> > > >> pg_backend_get_subxact_status(pid).\n> > > >>\n> > > >> People who want to could use it like:\n> > > >> SELECT * FROM pg_stat_activity psa,\n> pg_backend_get_subxact_status(pid) sub;\n> > > >\n> > > > I have provided two function, one for subtransaction counts and other\n> > > > whether subtransaction cache is overflowed or not, we can use like\n> > > > this, if we think this is better way to do it then we can also add\n> > > > another function for the lastOverflowedXid\n> > >\n> > > The general approach looks good to me. 
I think we could have just one\n> > > function for all three values, though.\n> >\n> > If we create just one function then the output type will be a tuple\n> > then we might have to add another view on top of that. Is there any\n> > better way to do that?\n>\n> I don't think you'd need to add a view on top of it.\n>\n> Compare:\n>\n> postgres=# SELECT 1, pg_config() LIMIT 1;\n> ?column? | pg_config\n> ----------+----------------------------\n> 1 | (BINDIR,/usr/pgsql-14/bin)\n>\n> postgres=# SELECT 1, c FROM pg_config() c LIMIT 1;\n> ?column? | c\n> ----------+----------------------------\n> 1 | (BINDIR,/usr/pgsql-14/bin)\n>\n> postgres=# SELECT 1, c.* FROM pg_config() c LIMIT 1;\n> ?column? | name | setting\n> ----------+--------+-------------------\n> 1 | BINDIR | /usr/pgsql-14/bin\n>\n\nOkay, that makes sense, I have modified it to make a single function.\n\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 13 Jan 2022 15:55:49 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Tue, Dec 14, 2021 at 6:23 PM Ashutosh Sharma <ashu.coek88@gmail.com>\nwrote:\n\nIn the latest patch I have fixed comments given here except a few.\n\nI have looked into the v2 patch and here are my comments:\n\n+ PG_RETURN_INT32(local_beentry->subxact_overflowed);\n+}\n\nShould this be PG_RETURN_BOOL instead of PG_RETURN_INT32??\n\nWith the new patch this is not relevant because we are returning a tuple.\n\n+{ oid => '6107', descr => 'statistics: cached subtransaction count of\nbackend',\n+ proname => 'pg_stat_get_backend_subxact_count', provolatile => 's',\nproparallel => 'r',\n+ prorettype => 'int4', proargtypes => 'int4',\n+ prosrc => 'pg_stat_get_backend_subxact_count' },\n+{ oid => '6108', descr => 'statistics: subtransaction cache of backend\noverflowed',\n+ proname => 'pg_stat_get_backend_subxact_overflow', 
provolatile => 's',\nproparallel => 'r',\n+ prorettype => 'bool', proargtypes => 'int4',\n+ prosrc => 'pg_stat_get_backend_subxact_overflow' },\n\nThe description says that the two new functions show the statistics for\n\"cached subtransaction count of backend\" and \"subtransaction cache of\nbackend overflowed\". But, when these functions are called they show these\nstats for the non-backend processes like checkpointer, walwriter etc. as\nwell. Should that happen?\n\nI am following a similar description as pg_stat_get_backend_pid,\npg_stat_get_backend_idset and other relevant functions.\n\ntypedef struct LocalPgBackendStatus\n * not.\n */\n TransactionId backend_xmin;\n+\n+ /*\n+ * Number of cached subtransactions in the current session.\n+ */\n+ int subxact_count;\n+\n+ /*\n+ * The number of subtransactions in the current session exceeded the\ncached\n+ * subtransaction limit.\n+ */\n+ bool subxact_overflowed;\n\nAll the variables inside this LocalPgBackendStatus structure are prefixed\nwith \"backend\" word. 
Can we do the same for the newly added variables as\nwell?\n\nDone\n\n+ * Get the xid and xmin, nsubxid and overflow status of the backend.\nThe\n\nShould this be put as - \"xid, xmin, nsubxid and overflow\" instead of \"xid\nand xmin, nsubxid and overflow\"?\n\nI missed fixing this one in the last patch, so I have updated it again.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 13 Jan 2022 16:01:24 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "Thanks for the new patch!\r\n\r\n+ <para>\r\n+ Returns a record of information about the backend's subtransactions.\r\n+ The fields returned are <parameter>subxact_count</parameter> identifies\r\n+ number of active subtransaction and <parameter>subxact_overflow\r\n+ </parameter> shows whether the backend's subtransaction cache is\r\n+ overflowed or not.\r\n+ </para></entry>\r\n+ </para></entry>\r\n\r\nnitpick: There is an extra \"</para></entry>\" here.\r\n\r\nWould it be more accurate to say that subxact_count is the number of\r\nsubxids that are cached? 
It can only ever go up to\r\nPGPROC_MAX_CACHED_SUBXIDS.\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 13 Jan 2022 22:27:31 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Thu, Jan 13, 2022 at 10:27:31PM +0000, Bossart, Nathan wrote:\n> Thanks for the new patch!\n> \n> + <para>\n> + Returns a record of information about the backend's subtransactions.\n> + The fields returned are <parameter>subxact_count</parameter> identifies\n> + number of active subtransaction and <parameter>subxact_overflow\n> + </parameter> shows whether the backend's subtransaction cache is\n> + overflowed or not.\n> + </para></entry>\n> + </para></entry>\n> \n> nitpick: There is an extra \"</para></entry>\" here.\n\nAlso the sentence looks a bit weird. I think something like that would be\nbetter:\n\n> + Returns a record of information about the backend's subtransactions.\n> + The fields returned are <parameter>subxact_count</parameter>, which\n> + identifies the number of active subtransaction and <parameter>subxact_overflow\n> + </parameter>, which shows whether the backend's subtransaction cache is\n> + overflowed or not.\n\nWhile on the sub-transaction overflow topic, I'm wondering if we should also\nraise a warning (maybe optionally) immediately when a backend overflows (so in\nGetNewTransactionId()).\n\nLike many I previously had to investigate a slowdown due to sub-transaction\noverflow, and even with the information available in a monitoring view (I had\nto rely on a quick hackish extension as I couldn't patch postgres) it's not\nterribly fun to do this way. 
On top of that log analyzers like pgBadger could\nhelp to highlight such a problem.\n\nI don't want to derail this thread so let me know if I should start a distinct\ndiscussion for that.\n\n\n", "msg_date": "Fri, 14 Jan 2022 15:47:26 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Fri, Jan 14, 2022 at 1:17 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Thu, Jan 13, 2022 at 10:27:31PM +0000, Bossart, Nathan wrote:\n> > Thanks for the new patch!\n> >\n> > + <para>\n> > + Returns a record of information about the backend's\n> subtransactions.\n> > + The fields returned are <parameter>subxact_count</parameter>\n> identifies\n> > + number of active subtransaction and <parameter>subxact_overflow\n> > + </parameter> shows whether the backend's subtransaction cache is\n> > + overflowed or not.\n> > + </para></entry>\n> > + </para></entry>\n> >\n> > nitpick: There is an extra \"</para></entry>\" here.\n>\n> Also the sentence looks a bit weird. 
I think something like that would be\n> better:\n>\n> > + Returns a record of information about the backend's\n> subtransactions.\n> > + The fields returned are <parameter>subxact_count</parameter>,\n> which\n> > + identifies the number of active subtransaction and\n> <parameter>subxact_overflow\n> > + </parameter>, which shows whether the backend's subtransaction\n> cache is\n> > + overflowed or not.\n>\n\nThanks for looking into this, I will work on this next week.\n\n\n> While on the sub-transaction overflow topic, I'm wondering if we should\n> also\n> raise a warning (maybe optionally) immediately when a backend overflows\n> (so in\n> GetNewTransactionId()).\n>\n> Like many I previously had to investigate a slowdown due to sub-transaction\n> overflow, and even with the information available in a monitoring view (I\n> had\n> to rely on a quick hackish extension as I couldn't patch postgres) it's not\n> terribly fun to do this way. On top of that log analyzers like pgBadger\n> could\n> help to highlight such a problem.\n>\n> I don't want to derail this thread so let me know if I should start a\n> distinct\n> discussion for that.\n>\n\nYeah that seems like a good idea.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 14 Jan 2022 21:18:59 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> Like many I previously had to investigate a slowdown due to sub-transaction\n> overflow, and even with the information available in a monitoring view (I had\n> to rely on a quick hackish extension as I couldn't patch postgres) it's not\n> terribly fun to do this way. On top of that log analyzers like pgBadger could\n> help to highlight such a problem.\n\nIt feels to me like far too much effort is being invested in fundamentally\nthe wrong direction here. 
If the subxact overflow business is causing\nreal-world performance problems, let's find a way to fix that, not put\neffort into monitoring tools that do little to actually alleviate anyone's\npain.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 Jan 2022 11:25:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On 1/14/22, 8:26 AM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> Julien Rouhaud <rjuju123@gmail.com> writes:\r\n>> Like many I previously had to investigate a slowdown due to sub-transaction\r\n>> overflow, and even with the information available in a monitoring view (I had\r\n>> to rely on a quick hackish extension as I couldn't patch postgres) it's not\r\n>> terribly fun to do this way. On top of that log analyzers like pgBadger could\r\n>> help to highlight such a problem.\r\n>\r\n> It feels to me like far too much effort is being invested in fundamentally\r\n> the wrong direction here. If the subxact overflow business is causing\r\n> real-world performance problems, let's find a way to fix that, not put\r\n> effort into monitoring tools that do little to actually alleviate anyone's\r\n> pain.\r\n\r\n+1\r\n\r\nAn easy first step might be to increase PGPROC_MAX_CACHED_SUBXIDS and\r\nNUM_SUBTRANS_BUFFERS. 
This wouldn't be a long-term solution to all\r\nsuch performance problems, and we'd still probably want the proposed\r\nmonitoring tools, but maybe it'd kick the can down the road a bit.\r\nPerhaps another improvement could be to store the topmost transaction\r\nalong with the parent transaction in the subtransaction log to avoid\r\nthe loop in SubTransGetTopmostTransaction().\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 14 Jan 2022 19:42:29 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Fri, Jan 14, 2022 at 07:42:29PM +0000, Bossart, Nathan wrote:\n> On 1/14/22, 8:26 AM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\n> >\n> > It feels to me like far too much effort is being invested in fundamentally\n> > the wrong direction here. If the subxact overflow business is causing\n> > real-world performance problems, let's find a way to fix that, not put\n> > effort into monitoring tools that do little to actually alleviate anyone's\n> > pain.\n> \n> +1\n\nAgreed, it would be better but if that leads to significant work that doesn't\nland in pg15, it would be nice to at least get more monitoring possibilities\nin pg15 to help locate problems in application.\n\n> An easy first step might be to increase PGPROC_MAX_CACHED_SUBXIDS and\n> NUM_SUBTRANS_BUFFERS.\n\nThere's already something proposed for slru sizing:\nhttps://commitfest.postgresql.org/36/2627/. Unfortunately it hasn't been\ncommitted yet despite some popularity. I also don't know how much it improves\nworkloads that hit the overflow issue.\n\n> This wouldn't be a long-term solution to all\n> such performance problems, and we'd still probably want the proposed\n> monitoring tools, but maybe it'd kick the can down the road a bit.\n\nYeah simply increasing PGPROC_MAX_CACHED_SUBXIDS won't really solve the\nproblem. 
Also the xid cache is already ~30% of the PGPROC size, increasing it\nany further is likely to end up being a loss for everyone that doesn't heavily\nrely on needing more than 64 subtransactions.\n\nThere's also something proposed at\nhttps://www.postgresql.org/message-id/003201d79d7b$189141f0$49b3c5d0$@tju.edu.cn,\nwhich seems to reach some nice improvement without a major redesign of the\nsubtransaction system, but I now realize that apparently no one added it to the\ncommitfest :(\n\n\n", "msg_date": "Sat, 15 Jan 2022 12:15:18 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Fri, Jan 14, 2022 at 07:42:29PM +0000, Bossart, Nathan wrote:\n>> On 1/14/22, 8:26 AM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\n>>> It feels to me like far too much effort is being invested in fundamentally\n>>> the wrong direction here.\n\n> Agreed, it would be better but if that leads to significant work that doesn't\n> land in pg15, it would be nice to at least get more monitoring possibilities\n> in pg15 to help locate problems in application.\n\nThe discussion just upthread was envisioning not only that we'd add\ninfrastructure for this, but then that other projects would build\nmore infrastructure on top of that. That's an awful lot of work\nthat will become useless --- indeed maybe counterproductive --- once\nwe find an actual fix. I say \"counterproductive\" because I wonder\nwhat compatibility problems we'd have if the eventual fix results in\nfundamental changes in the way things work in this area.\n\nSince it's worked the same way for a lot of years, I'm not impressed\nby arguments that we need to push something into v15.\n\n>> An easy first step might be to increase PGPROC_MAX_CACHED_SUBXIDS and\n>> NUM_SUBTRANS_BUFFERS.\n\nI don't think that's an avenue to a fix. 
We need some more-fundamental\nrethinking about how this should work. (No, I don't have any ideas\nat the moment.)\n\n> There's also something proposed at\n> https://www.postgresql.org/message-id/003201d79d7b$189141f0$49b3c5d0$@tju.edu.cn,\n> which seems to reach some nice improvement without a major redesign of the\n> subtransaction system, but I now realize that apparently no one added it to the\n> commitfest :(\n\nHmm ... that could win if we're looking up the same subtransaction's\nparent over and over, but I wonder if it wouldn't degrade a lot of\nworkloads too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 15 Jan 2022 00:13:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Sat, Jan 15, 2022 at 12:13:39AM -0500, Tom Lane wrote:\n> \n> The discussion just upthread was envisioning not only that we'd add\n> infrastructure for this, but then that other projects would build\n> more infrastructure on top of that. That's an awful lot of work\n> that will become useless --- indeed maybe counterproductive --- once\n> we find an actual fix. I say \"counterproductive\" because I wonder\n> what compatibility problems we'd have if the eventual fix results in\n> fundamental changes in the way things work in this area.\n\nI'm not sure what you're referring to. If that's the hackish extension I\nmentioned, its goal was to provide exactly what this thread is about so I\nwasn't advocating for additional tooling. 
If that's about pgBadger, no extra\nwork would be needed: there's already a report about any WARNING/ERROR and such\nfound in the logs, so the information would be immediately visible.\n\n> Since it's worked the same way for a lot of years, I'm not impressed\n> by arguments that we need to push something into v15.\n\nWell, people have also been struggling with it for a lot of years, even if they\ndon't always come here to complain about it. And apparently at least 2 people\nalready had to code something similar to be able to find the problematic\ntransactions, so I still think that at least some monitoring improvement would\nbe welcome in v15 if none of the other approaches get committed.\n\n\n", "msg_date": "Sat, 15 Jan 2022 13:29:36 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Fri, Jan 14, 2022 at 9:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > Like many I previously had to investigate a slowdown due to\n> sub-transaction\n> > overflow, and even with the information available in a monitoring view\n> (I had\n> > to rely on a quick hackish extension as I couldn't patch postgres) it's\n> not\n> > terribly fun to do this way. On top of that log analyzers like pgBadger\n> could\n> > help to highlight such a problem.\n>\n> It feels to me like far too much effort is being invested in fundamentally\n> the wrong direction here. If the subxact overflow business is causing\n> real-world performance problems, let's find a way to fix that, not put\n> effort into monitoring tools that do little to actually alleviate anyone's\n> pain.\n>\n\nI don't think it is really a big effort or big change. 
But I completely\nagree with you that if we can completely resolve this issue then there is\nno point in providing any such status or LOG.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 17 Jan 2022 09:29:33 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On 2022-01-14 11:25:45 -0500, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > Like many I previously had to investigate a slowdown due to sub-transaction\n> > overflow, and even with the information available in a monitoring view (I had\n> > to rely on a quick hackish extension as I couldn't patch postgres) it's not\n> > terribly fun to do this way. 
On top of that log analyzers like pgBadger could\n> > help to highlight such a problem.\n> \n> It feels to me like far too much effort is being invested in fundamentally\n> the wrong direction here. If the subxact overflow business is causing\n> real-world performance problems, let's find a way to fix that, not put\n> effort into monitoring tools that do little to actually alleviate anyone's\n> pain.\n\nThere seems to be some agreement on this (I certainly do agree). Thus it seems\nwe should mark the CF entry as rejected?\n\nIt's been failing on cfbot for weeks... https://cirrus-ci.com/task/5289336424890368?logs=docs_build#L347\n\n\n", "msg_date": "Mon, 21 Mar 2022 16:45:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Tue, Mar 22, 2022 at 5:15 AM Andres Freund <andres@anarazel.de> wrote:\n\n> There seems to be some agreement on this (I certainly do agree). Thus it seems\n> we should mark the CF entry as rejected?\n>\n> It's been failing on cfbot for weeks... https://cirrus-ci.com/task/5289336424890368?logs=docs_build#L347\n\nI have marked it rejected.\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 22 Mar 2022 10:36:30 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Mon, Mar 21, 2022 at 7:45 PM Andres Freund <andres@anarazel.de> wrote:\n> > It feels to me like far too much effort is being invested in fundamentally\n> > the wrong direction here. If the subxact overflow business is causing\n> > real-world performance problems, let's find a way to fix that, not put\n> > effort into monitoring tools that do little to actually alleviate anyone's\n> > pain.\n>\n> There seems to be some agreement on this (I certainly do agree). 
Thus it seems\n> we should mark the CF entry as rejected?\n\nI don't think I agree with this outcome, for two reasons.\n\nFirst, we're just talking about an extra couple of columns in\npg_stat_activity here, which does not seem like a heavy price to pay.\nI'm not even sure we need two columns; I think we could get down to\none pretty easily. Rough idea: number of cached subtransaction XIDs if\nnot overflowed, else NULL. Or if that's likely to create 0/NULL\nconfusion, then maybe just a Boolean, overflowed or not.\n\nSecond, the problem seems pretty fundamental to me. Shared memory is\nfixed size, so we cannot use it to store an unbounded number of\nsubtransaction IDs. We could perhaps rejigger things to be more\nmemory-efficient in some way, but no matter how many subtransaction\nXIDs you can keep in shared memory, the user can always consume that\nnumber plus one -- unless you allow for ~2^31 in shared memory, which\nseems unrealistic. To me, that means that overflowed snapshots are not\ngoing away. We could make them less painful by rewriting the SLRU\nstuff to be more efficient, and I bet that's possible, but I think\nit's probably hard, or someone would have gotten it done by now. This\nhas been sucking for a long time and I see no evidence that progress\nis imminent. Even if it happens, it is unlikely that it will be a full\nsolution. If it were possible to make SLRU lookups fast enough not to\nmatter, we wouldn't need to have hint bits, but in reality we do have\nthem and attempts to get rid of them have not gone well up until now,\nand in my opinion probably never will.\n\nThe way that I view this problem is that it is relatively rare but\nhard for some users to troubleshoot. I think I've seen it come up\nmultiple times, and judging from the earlier responses on this thread,\nseveral other people here have, too. In my experience, the problem is\ninevitably that someone has a DML statement inside a plpgsql EXCEPTION\nblock inside a plpgsql loop. 
Concurrently with that, they are running\na lot of queries that look at recently modified data, so that the\noverflowed snapshots trigger SLRU lookups often enough to matter. How\nis a user supposed to identify which backend is causing the problem,\nas things stand today? I have generally given people the advice to go\nfind the DML inside of a plpgsql EXCEPTION block inside of a loop, but\nsome users have trouble doing that. The DBA who is observing the\nperformance problem is not necessarily the developer who wrote all of\nthe PL code, and the PL code may be large and badly formatted and\nthere could be a bunch of EXCEPTION blocks and it might not be clear\nwhich one is the problem. The exception block could be calling another\nfunction or procedure that does the actual DML rather than doing it\ndirectly, and the loop surrounding it might not be in the same\nfunction or procedure but in some other one that calls it, or it could\nbe called repeatedly from the SQL level.\n\nI think I fundamentally disagree with the idea that we should refuse\nto expose instrumentation data because some day the internals might\nchange. If we accepted that argument categorically, we wouldn't have\nthings like backend_xmin or backend_xid in pg_stat_activity, or wait\nevents either, but we do have those things and users find them useful.\nThey suck in the sense that you need to know quite a bit about how the\ninternals work in order to use them to find problems, but people who\nwant to support production PostgreSQL instances have to learn about\nhow those internals work one way or the other because they\ndemonstrably matter. 
It is absolutely stellar when we can say \"hey, we\ndon't need to have a way for users to see what's going on here\ninternally because they don't ever need to care,\" but once it is\nestablished that they do need to care, we should let them see directly\nthe data they need to care about rather than forcing them to\ntroubleshoot the problem in some more roundabout way like auditing all\nof the code and guessing which part is the problem, or writing custom\ndtrace scripts to run on their production instances.\n\nIf and when it happens that a field like backend_xmin or the new ones\nproposed here are no longer relevant, we can just remove them from the\nmonitoring views. Yeah, that's a backward compatibility break, and\nthere's some pain associated with that. But we have demonstrated that\nwe are perfectly willing to incur the pain associated with adding new\ncolumns when there is new and valuable information to display, and\nthat is equally a compatibility break, in the sense that it has about\nthe same chance of making pg_upgrade fail.\n\nIn short, I think this is a good idea, and if somebody thinks that we\nshould solve the underlying problem instead, I'd like to hear what\npeople think a realistic solution might be. Because to me, it looks\nlike we're refusing to commit a patch that probably took an hour to\nwrite because with 10 years of engineering effort we could *maybe* fix\nthe root cause.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Nov 2022 10:09:57 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> In short, I think this is a good idea, and if somebody thinks that we\n> should solve the underlying problem instead, I'd like to hear what\n> people think a realistic solution might be. 
Because to me, it looks\n> like we're refusing to commit a patch that probably took an hour to\n> write because with 10 years of engineering effort we could *maybe* fix\n> the root cause.\n\nMaybe the original patch took an hour to write, but it's sure been\nbikeshedded to death :-(. I was complaining about the total amount\nof attention spent more than the patch itself.\n\nThe patch of record seems to be v4 from 2022-01-13, which was failing\nin cfbot at last report but presumably could be fixed easily. The\nproposed documentation's grammar is pretty shaky, but I don't see\nmuch else wrong in a quick eyeball scan.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 14 Nov 2022 10:41:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Mon, Nov 14, 2022 at 10:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Maybe the original patch took an hour to write, but it's sure been\n> bikeshedded to death :-(. I was complaining about the total amount\n> of attention spent more than the patch itself.\n\nUnfortunately, that problem is not unique to this patch, and even more\nunfortunately, despite all the bikeshedding, we still often get it\nwrong. Catching up from my week off I see that you've fixed not one\nbut two bugs in a patch I thought I'd reviewed half to death. :-(\n\n> The patch of record seems to be v4 from 2022-01-13, which was failing\n> in cfbot at last report but presumably could be fixed easily. The\n> proposed documentation's grammar is pretty shaky, but I don't see\n> much else wrong in a quick eyeball scan.\n\nI can take a crack at improving the documentation. 
Do you have a view\non the best way to cut this down to a single new column, or the\ndesirability of doing so?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Nov 2022 10:52:08 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Mon, Nov 14, 2022 at 10:09:57AM -0500, Robert Haas wrote:\n> On Mon, Mar 21, 2022 at 7:45 PM Andres Freund <andres@anarazel.de> wrote:\n> > > It feels to me like far too much effort is being invested in fundamentally\n> > > the wrong direction here. If the subxact overflow business is causing\n> > > real-world performance problems, let's find a way to fix that, not put\n> > > effort into monitoring tools that do little to actually alleviate anyone's\n> > > pain.\n> >\n> > There seems to be some agreement on this (I certainly do agree). Thus it seems\n> > we should mark the CF entry as rejected?\n> \n> I don't think I agree with this outcome, for two reasons.\n> \n> First, we're just talking about an extra couple of columns in\n> pg_stat_activity here, which does not seem like a heavy price to pay.\n\nThe most recent patch adds a separate function rather than adding more\ncolumns to pg_stat_activity. I think the complaint about making that\nview wider for infrequently-used columns is entirely valid.\n\n> If and when it happens that a field like backend_xmin or the new ones\n> proposed here are no longer relevant, we can just remove them from the\n> monitoring views. Yeah, that's a backward compatibility break, and\n> there's some pain associated with that. 
But we have demonstrated that\n> we are perfectly willing to incur the pain associated with adding new\n> columns when there is new and valuable information to display, and\n> that is equally a compatibility break, in the sense that it has about\n> the same chance of making pg_upgrade fail.\n\nWhy would pg_upgrade fail due to new/removed columns in\npg_stat_activity? Do you mean if a user creates a view on top of it?\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 14 Nov 2022 09:57:39 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Mon, Nov 14, 2022 at 10:57 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > First, we're just talking about an extra couple of columns in\n> > pg_stat_activity here, which does not seem like a heavy price to pay.\n>\n> The most recent patch adds a separate function rather than adding more\n> columns to pg_stat_activity. I think the complaint about making that\n> view wider for infrequently-used columns is entirely valid.\n\nI guess that's OK. I don't particularly favor that approach here but I\ncan live with it. I agree that too-wide views are annoying, but as far\nas pg_stat_activity goes, that ship has pretty much sailed already,\nand the same is true for a lot of other views. Inventing a one-off\nsolution for this particular case doesn't seem particularly warranted\nto me but, again, I can live with it.\n\n> Why would pg_upgrade fail due to new/removed columns in\n> pg_stat_activity? Do you mean if a user creates a view on top of it?\n\nYes, that is a thing that some people do, and I think it is the most\nlikely way for any changes to the view definition to cause\ncompatibility problems. 
I could be wrong, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Nov 2022 11:04:14 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Mon, Nov 14, 2022 at 9:04 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Nov 14, 2022 at 10:57 AM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n> > > First, we're just talking about an extra couple of columns in\n> > > pg_stat_activity here, which does not seem like a heavy price to pay.\n> >\n> > The most recent patch adds a separate function rather than adding more\n> > columns to pg_stat_activity. I think the complaint about making that\n> > view wider for infrequently-used columns is entirely valid.\n>\n> I guess that's OK. I don't particularly favor that approach here but I\n> can live with it. I agree that too-wide views are annoying, but as far\n> as pg_stat_activity goes, that ship has pretty much sailed already,\n> and the same is true for a lot of other views. Inventing a one-off\n> solution for this particular case doesn't seem particularly warranted\n> to me but, again, I can live with it.\n>\n>\nI can see putting counts that people would want to use for statistics\nelsewhere but IIUC the whole purpose of \"overflowed\" is to inform someone\nthat their session presently has degraded performance because it has\ncreated too many subtransactions. Just because the \"degraded\" condition\nitself is rare doesn't mean the field \"is my session degraded\" is going to\nbe seldom consulted. In fact, I would rather think it is always briefly\nconsulted to confirm it has the expected value of \"false\" (blank, IMO,\ndon't show anything in that column unless it is exceptional) and the\npresence of a value there would draw attention to the desired fact that\nsomething is wrong and warrants further investigation. 
The\npg_stat_activity view seems like the perfect place to at least display that\nexception flag.\n\nDavid J.", "msg_date": "Mon, 14 Nov 2022 09:17:46 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Mon, Nov 14, 2022 at 11:18 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>> I guess that's OK. I don't particularly favor that approach here but I\n>> can live with it. I agree that too-wide views are annoying, but as far\n>> as pg_stat_activity goes, that ship has pretty much sailed already,\n>> and the same is true for a lot of other views. Inventing a one-off\n>> solution for this particular case doesn't seem particularly warranted\n>> to me but, again, I can live with it.\n>\n> I can see putting counts that people would want to use for statistics elsewhere but IIUC the whole purpose of \"overflowed\" is to inform someone that their session presently has degraded performance because it has created too many subtransactions. Just because the \"degraded\" condition itself is rare doesn't mean the field \"is my session degraded\" is going to be seldom consulted. In fact, I would rather think it is always briefly consulted to confirm it has the expected value of \"false\" (blank, IMO, don't show anything in that column unless it is exceptional) and the presence of a value there would draw attention to the desired fact that something is wrong and warrants further investigation. The pg_stat_activity view seems like the perfect place to at least display that exception flag.\n\nOK, thanks for voting. I take that as +1 for putting it in\npg_stat_activity proper, which is also my preferred approach.\n\nHowever, a slight correction: it doesn't inform you that your session\nhas degraded performance. 
It informs you that your session may be\ndegrading everyone else's performance.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Nov 2022 11:28:25 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "Making the information available in pg_stat_activity makes it a lot easier\nto identify the pid which has caused the subtran overflow. Debugging\nthrough the app code can be an endless exercise and logging every statement\nin postgresql logs is not practical either. If the overhead of fetching the\ninformation isn't too big, I think we should consider the\nsubtransaction_count and is_overflowed field as potential candidates for\nthe enhancement of pg_stat_activity.\n\n\nRegards\nAmit Singh\n\nOn Mon, Nov 14, 2022 at 11:10 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Mar 21, 2022 at 7:45 PM Andres Freund <andres@anarazel.de> wrote:\n> > > It feels to me like far too much effort is being invested in\n> fundamentally\n> > > the wrong direction here. If the subxact overflow business is causing\n> > > real-world performance problems, let's find a way to fix that, not put\n> > > effort into monitoring tools that do little to actually alleviate\n> anyone's\n> > > pain.\n> >\n> > There seems to be some agreement on this (I certainly do agree). Thus it\n> seems\n> > we should mark the CF entry as rejected?\n>\n> I don't think I agree with this outcome, for two reasons.\n>\n> First, we're just talking about an extra couple of columns in\n> pg_stat_activity here, which does not seem like a heavy price to pay.\n> I'm not even sure we need two columns; I think we could get down to\n> one pretty easily. Rough idea: number of cached subtransaction XIDs if\n> not overflowed, else NULL. 
Or if that's likely to create 0/NULL\n> confusion, then maybe just a Boolean, overflowed or not.\n>\n> Second, the problem seems pretty fundamental to me. Shared memory is\n> fixed size, so we cannot use it to store an unbounded number of\n> subtransaction IDs. We could perhaps rejigger things to be more\n> memory-efficient in some way, but no matter how many subtransaction\n> XIDs you can keep in shared memory, the user can always consume that\n> number plus one -- unless you allow for ~2^31 in shared memory, which\n> seems unrealistic. To me, that means that overflowed snapshots are not\n> going away. We could make them less painful by rewriting the SLRU\n> stuff to be more efficient, and I bet that's possible, but I think\n> it's probably hard, or someone would have gotten it done by now. This\n> has been sucking for a long time and I see no evidence that progress\n> is imminent. Even if it happens, it is unlikely that it will be a full\n> solution. If it were possible to make SLRU lookups fast enough not to\n> matter, we wouldn't need to have hint bits, but in reality we do have\n> them and attempts to get rid of them have not gone well up until now,\n> and in my opinion probably never will.\n>\n> The way that I view this problem is that it is relatively rare but\n> hard for some users to troubleshoot. I think I've seen it come up\n> multiple times, and judging from the earlier responses on this thread,\n> several other people here have, too. In my experience, the problem is\n> inevitably that someone has a DML statement inside a plpgsql EXCEPTION\n> block inside a plpgsql loop. Concurrently with that, they are running\n> a lot of queries that look at recently modified data, so that the\n> overflowed snapshot trigger SLRU lookups often enough to matter. How\n> is a user supposed to identify which backend is causing the problem,\n> as things stand today? 
I have generally given people the advice to go\n> find the DML inside of a plpgsql EXCEPTION block inside of a loop, but\n> some users have trouble doing that. The DBA who is observing the\n> performance problem is not necessarily the developer who wrote all of\n> the PL code, and the PL code may be large and badly formatted and\n> there could be a bunch of EXCEPTION blocks and it might not be clear\n> which one is the problem. The exception block could be calling another\n> function or procedure that does the actual DML rather than doing it\n> directly, and the loop surrounding it might not be in the same\n> function or procedure but in some other one that calls it, or it could\n> be called repeatedly from the SQL level.\n>\n> I think I fundamentally disagree with the idea that we should refuse\n> to expose instrumentation data because some day the internals might\n> change. If we accepted that argument categorically, we wouldn't have\n> things like backend_xmin or backend_xid in pg_stat_activity, or wait\n> events either, but we do have those things and users find them useful.\n> They suck in the sense that you need to know quite a bit about how the\n> internals work in order to use them to find problems, but people who\n> want to support production PostgreSQL instances have to learn about\n> how those internals work one way or the other because they\n> demonstrably matter. 
It is absolutely stellar when we can say \"hey, we\n> don't need to have a way for users to see what's going on here\n> internally because they don't ever need to care,\" but once it is\n> established that they do need to care, we should let them see directly\n> the data they need to care about rather than forcing them to\n> troubleshoot the problem in some more roundabout way like auditing all\n> of the code and guessing which part is the problem, or writing custom\n> dtrace scripts to run on their production instances.\n>\n> If and when it happens that a field like backend_xmin or the new ones\n> proposed here are no longer relevant, we can just remove them from the\n> monitoring views. Yeah, that's a backward compatibility break, and\n> there's some pain associated with that. But we have demonstrated that\n> we are perfectly willing to incur the pain associated with adding new\n> columns when there is new and valuable information to display, and\n> that is equally a compatibility break, in the sense that it has about\n> the same chance of making pg_upgrade fail.\n>\n> In short, I think this is a good idea, and if somebody thinks that we\n> should solve the underlying problem instead, I'd like to hear what\n> people think a realistic solution might be. Because to me, it looks\n> like we're refusing to commit a patch that probably took an hour to\n> write because with 10 years of engineering effort we could *maybe* fix\n> the root cause.\n>\n\n-- \n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n>\n>\n", "msg_date": "Tue, 15 Nov 2022 00:34:25 +0800", "msg_from": "Amit Singh <amitksingh.mumbai@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Mon, Nov 14, 2022 at 11:35 AM Amit Singh <amitksingh.mumbai@gmail.com> wrote:\n> Making the information available in pg_stat_activity makes it a lot easier to identify the pid which has caused the subtran overflow. Debugging through the app code can be an endless exercise and logging every statement in postgresql logs is not practical either. If the overhead of fetching the information isn't too big, I think we should consider the subtransaction_count and is_overflowed field as potential candidates for the enhancement of pg_stat_activity.\n\nThe overhead of fetching the information is not large, but Justin is\nconcerned about the effect on the display width. I feel that's kind of\na lost cause because it's so wide already anyway, but I don't see a\nreason why we need *two* new columns. 
Can't we get by with just one?\nIt could be overflowed true/false, or it could be the number of\nsubtransaction XIDs but with NULL instead if overflowed.\n\nDo you have a view on this point?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Nov 2022 11:41:35 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Mon, Nov 14, 2022 at 9:41 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Nov 14, 2022 at 11:35 AM Amit Singh <amitksingh.mumbai@gmail.com>\n> wrote:\n> > Making the information available in pg_stat_activity makes it a lot\n> easier to identify the pid which has caused the subtran overflow. Debugging\n> through the app code can be an endless exercise and logging every statement\n> in postgresql logs is not practical either. If the overhead of fetching the\n> information isn't too big, I think we should consider the\n> subtransaction_count and is_overflowed field as potential candidates for\n> the enhancement of pg_stat_activity.\n>\n> The overhead of fetching the information is not large, but Justin is\n> concerned about the effect on the display width. I feel that's kind of\n> a lost cause because it's so wide already anyway, but I don't see a\n> reason why we need *two* new columns. Can't we get by with just one?\n> It could be overflowed true/false, or it could be the number of\n> subtransaction XIDs but with NULL instead if overflowed.\n>\n> Do you have a view on this point?\n>\n>\nNULL when overflowed seems like the opposite of the desired effect, calling\nattention to the exceptional status. Make it a text column and write\n\"overflow\" or \"###\" as appropriate. 
Anyone using the column is going to\nend up wanting to special-case overflow anyway and number-to-text\nconversion aside from overflow is simple enough if a number, and not just a\ndisplay label, is needed.\n\nDavid J.", "msg_date": "Mon, 14 Nov 2022 09:48:21 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "\"David G. 
Johnston\" <david.g.johnston@gmail.com> writes:\n> On Mon, Nov 14, 2022 at 9:41 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> The overhead of fetching the information is not large, but Justin is\n>> concerned about the effect on the display width. I feel that's kind of\n>> a lost cause because it's so wide already anyway, but I don't see a\n>> reason why we need *two* new columns. Can't we get by with just one?\n>> It could be overflowed true/false, or it could be the number of\n>> subtransaction XIDs but with NULL instead if overflowed.\n\n> NULL when overflowed seems like the opposite of the desired effect, calling\n> attention to the exceptional status. Make it a text column and write\n> \"overflow\" or \"###\" as appropriate. Anyone using the column is going to\n> end up wanting to special-case overflow anyway and number-to-text\n> conversion aside from overflow is simple enough if a number, and not just a\n> display label, is needed.\n\nI'd vote for just overflowed true/false. Why do people need to know\nthe exact number of subtransactions? (If there is a use-case, that\nwould definitely be material for an auxiliary function instead of a\nview column.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 14 Nov 2022 12:29:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn 2022-11-14 12:29:58 -0500, Tom Lane wrote:\n> I'd vote for just overflowed true/false. Why do people need to know\n> the exact number of subtransactions? (If there is a use-case, that\n> would definitely be material for an auxiliary function instead of a\n> view column.)\n\nI'd go the other way. It's pretty unimportant whether it overflowed, it's\nimportant how many subtxns there are. The cases where overflowing causes real\nproblems are when there's many thousand subtxns - which one can't judge just\nfrom suboverflowed alone. 
Nor can monitoring a boolean tell you whether you're\ncreeping closer to the danger zone.\n\nMonitoring the number also has the advantage that we'd not embed an\nimplementation detail (\"suboverflowed\") in a view. The number of\nsubtransactions is far less prone to changing than the way we implement\nsubtransactions in the procarray.\n\nBut TBH, to me this still is something that'd be better addressed with a\ntracepoint.\n\nI don't buy the argument that the ship of pg_stat_activity width has entirely\nsailed. A session still fits onto a reasonably sized terminal in \\x output -\nbut not much longer.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 14 Nov 2022 09:47:48 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Mon, Nov 14, 2022 at 12:47 PM Andres Freund <andres@anarazel.de> wrote:\n> I'd go the other way. It's pretty unimportant whether it overflowed, it's\n> important how many subtxns there are. The cases where overflowing causes real\n> problems are when there's many thousand subtxns - which one can't judge just\n> from suboverflowed alone. Nor can monitoring a boolean tell you whether you're\n> creeping closer to the danger zone.\n\nThis is the opposite of what I believe to be true. I thought the\nproblem is that once a single backend overflows the subxid array, all\nsnapshots have to be created suboverflowed, and this makes visibility\nchecking more expensive. 
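For reference, that overflowed state can be reproduced with a minimal sketch like the following — assuming the default subxid cache size of PGPROC_MAX_CACHED_SUBXIDS (64) entries, and a scratch table t that is purely illustrative (the table name and iteration count are my assumptions, not taken from any patch in this thread):

```sql
-- Hedged reproduction sketch: DML inside a plpgsql EXCEPTION block
-- inside a loop, the pattern described earlier in the thread.
BEGIN;
DO $$
BEGIN
  FOR i IN 1..100 LOOP
    BEGIN
      -- Each entry into this EXCEPTION block starts a subtransaction,
      -- and the INSERT forces an XID to be assigned to it.
      INSERT INTO t VALUES (i);
    EXCEPTION WHEN others THEN
      NULL;  -- swallow errors, as the problematic application code does
    END;
  END LOOP;
END;
$$;
-- Keep this transaction open: once more than 64 subtransaction XIDs
-- have been assigned, the backend's subxid cache is marked overflowed,
-- and concurrent sessions' snapshots may fall back to pg_subtrans.
```

The backend stays overflowed for as long as the surrounding top-level transaction remains open, which is what forces other sessions to take suboverflowed snapshots.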
It's my impression that for some users this\ncreates an extremely steep performance cliff: the difference between\nno backends overflowing and 1 backend overflowing is large, but\nwhether you are close to the limit makes no difference as long as you\ndon't reach it, and once you've passed it it makes little difference\nhow far past it you go.\n\n> But TBH, to me this still is something that'd be better addressed with a\n> tracepoint.\n\nI think that makes it far, far less accessible to the typical user.\n\n> I don't buy the argument that the ship of pg_stat_activity width has entirely\n> sailed. A session still fits onto a reasonably sized terminal in \\x output -\n> but not much longer.\n\nI guess it depends on what you mean by reasonable. For me, without \\x,\nit wraps across five times on an idle system with the 24x80 window\nthat I normally use, and even if I full screen my terminal window, it\nstill wraps around. With \\x, sure, it fits, but only if the query is\nshorter than the width of my window minus ~25 characters, which isn't\nthat likely to be the case IME because users write long queries. I\ndon't even try to use \\x most of the time because the queries are\nlikely to be long enough to destroy any benefit, but it all depends on\nhow big your terminal is and how long your queries are.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Nov 2022 13:43:41 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Mon, Nov 14, 2022 at 11:43 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Nov 14, 2022 at 12:47 PM Andres Freund <andres@anarazel.de> wrote:\n> > I'd go the other way. It's pretty unimportant whether it overflowed, it's\n> > important how many subtxns there are. 
The cases where overflowing causes\n> real\n> > problems are when there's many thousand subtxns - which one can't judge\n> just\n> > from suboverflowed alone. Nor can monitoring a boolean tell you whether\n> you're\n> > creeping closer to the danger zone.\n>\n> This is the opposite of what I believe to be true. I thought the\n> problem is that once a single backend overflows the subxid array, all\n> snapshots have to be created suboverflowed, and this makes visibility\n> checking more expensive. It's my impression that for some users this\n> creates and extremely steep performance cliff: the difference between\n> no backends overflowing and 1 backend overflowing is large, but\n> whether you are close to the limit makes no difference as long as you\n> don't reach it, and once you've passed it it makes little difference\n> how far past it you go.\n>\n>\nAssuming getting an actual count value to print is fairly cheap, or even a\nsunk cost if you are going to report overflow, I don't see why we wouldn't\nwant to provide the more detailed data.\n\nMy concern, through ignorance, with reporting a number is that it would\nhave no context in the query result itself. If I have two rows with\nnumbers, one with 10 and one with 1,000, is the two orders of magnitude of\nthe second number important or does overflow happen at, say, 65,000 and so\nboth numbers are exceedingly small and thus not worth worrying about? That\ncan be handled by documentation just fine, so long as the reference number\nin question isn't a per-session variable. Otherwise, showing some kind of\n\"percent of max\" computation seems warranted. In which case maybe the two\npresentation outputs would be:\n\n1,000 (13%)\nOverflowed\n\nDavid J.", "msg_date": "Mon, 14 Nov 2022 12:16:51 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Mon, Nov 14, 2022 at 2:17 PM David G. 
Johnston\n<david.g.johnston@gmail.com> wrote:\n> Assuming getting an actual count value to print is fairly cheap, or even a sunk cost if you are going to report overflow, I don't see why we wouldn't want to provide the more detailed data.\n>\n> My concern, through ignorance, with reporting a number is that it would have no context in the query result itself. If I have two rows with numbers, one with 10 and one with 1,000, is the two orders of magnitude of the second number important or does overflow happen at, say, 65,000 and so both numbers are exceedingly small and thus not worth worrying about? That can be handled by documentation just fine, so long as the reference number in question isn't a per-session variable. Otherwise, showing some kind of \"percent of max\" computation seems warranted. In which case maybe the two presentation outputs would be:\n>\n> 1,000 (13%)\n> Overflowed\n\nI think the idea of cramming a bunch of stuff into a text field is\ndead on arrival. Data types are a wonderful invention because they let\npeople write queries, say looking for backends where overflowed =\ntrue, or backends where subxids > 64. That gets much harder if the\nquery has to try to make sense of some random text representation.\n\nIf both values are separately important, then we need to report them\nboth, and the only question is whether to do that in pg_stat_activity\nor via a side mechanism. What I don't yet understand is why that's\ntrue. I think the important question is whether there are overflowed\nbackends, and Andres thinks it's how many subtransaction XIDs there\nare, so there is a reasonable chance that both things actually matter\nin separate scenarios. 
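For example, with properly typed columns either check is a one-line query. (A sketch only; the column names here are illustrative, borrowed from the proposed patch rather than from a committed interface:)

```sql
-- which backends have already overflowed their subxid cache?
SELECT pid, query
FROM pg_stat_activity
WHERE subxact_overflow;

-- which backends are creeping toward the 64-entry cache limit?
SELECT pid, subxact_count
FROM pg_stat_activity
WHERE subxact_count > 48;
```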
But I only know the scenario in which\noverflowed matters, not the one in which subtransaction XID count\nmatters.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Nov 2022 14:32:16 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Nov 14, 2022 at 12:47 PM Andres Freund <andres@anarazel.de> wrote:\n>> I'd go the other way. It's pretty unimportant whether it overflowed, it's\n>> important how many subtxns there are. The cases where overflowing causes real\n>> problems are when there's many thousand subtxns - which one can't judge just\n>> from suboverflowed alone. Nor can monitoring a boolean tell you whether you're\n>> creeping closer to the danger zone.\n\n> This is the opposite of what I believe to be true. I thought the\n> problem is that once a single backend overflows the subxid array, all\n> snapshots have to be created suboverflowed, and this makes visibility\n> checking more expensive.\n\nYeah, that's what I thought too. Andres, please enlarge ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 14 Nov 2022 16:03:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn 2022-11-14 13:43:41 -0500, Robert Haas wrote:\n> On Mon, Nov 14, 2022 at 12:47 PM Andres Freund <andres@anarazel.de> wrote:\n> > I'd go the other way. It's pretty unimportant whether it overflowed, it's\n> > important how many subtxns there are. The cases where overflowing causes real\n> > problems are when there's many thousand subtxns - which one can't judge just\n> > from suboverflowed alone. Nor can monitoring a boolean tell you whether you're\n> > creeping closer to the danger zone.\n> \n> This is the opposite of what I believe to be true. 
I thought the\n> problem is that once a single backend overflows the subxid array, all\n> snapshots have to be created suboverflowed, and this makes visibility\n> checking more expensive. It's my impression that for some users this\n> creates and extremely steep performance cliff: the difference between\n> no backends overflowing and 1 backend overflowing is large, but\n> whether you are close to the limit makes no difference as long as you\n> don't reach it, and once you've passed it it makes little difference\n> how far past it you go.\n\nFirst, it's not good to have a cliff that you can't see coming - presumably\nyou'd want to warn *before* you regularly reach PGPROC_MAX_CACHED_SUBXIDS\nsubxids, rather than when the shit has hit the fan already.\n\nIMO the number matters a lot when analyzing why this is happening / how to\nreact. A session occasionally reaching 65 subxids might be tolerable and not\nnecessarily indicative of a bug. But 100k subxids is something that one just\ncan't accept.\n\n\nPerhaps this would better be tackled by a new \"visibility\" view. It could show\n- number of sessions with a snapshot\n- max age of backend xmin\n- pid with max backend xmin\n- number of sessions that suboverflowed\n- pid of the session with the most subxids\n- age of the oldest prepared xact\n- age of the oldest slot\n- age of the oldest walsender\n- ...\n\nPerhaps implemented in SQL, with new functions for accessing the properties we\ndon't expose today. That'd address the pg_stat_activity width, while still\nallowing very granular access when necessary. And provide insight into\nsomething that's way too hard to query right now.\n\n\n> > I don't buy the argument that the ship of pg_stat_activity width has entirely\n> > sailed. A session still fits onto a reasonably sized terminal in \\x output -\n> > but not much longer.\n> \n> I guess it depends on what you mean by reasonable. 
For me, without \\x,\n> it wraps across five times on an idle system with the 24x80 window\n> that I normally use, and even if I full screen my terminal window, it\n> still wraps around. With \\x, sure, it fits, both only if the query is\n> shorter than the width of my window minus ~25 characters, which isn't\n> that likely to be the case IME because users write long queries.\n>\n> I don't even try to use \\x most of the time because the queries are likely\n> to be long enough to destroy any benefit, but it all depends on how big your\n> terminal is and how long your queries are.\n\nI pretty much always use less with -S/--chop-long-lines (via $LESS), otherwise\nI find psql to be pretty hard to use.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 14 Nov 2022 13:17:28 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Tue, Nov 15, 2022 at 2:47 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> First, it's not good to have a cliff that you can't see coming - presumbly\n> you'd want to warn *before* you regularly reach PGPROC_MAX_CACHED_SUBXIDS\n> subxids, rather when the shit has hit the fan already.\n\nI agree with the point that it is good to have a way to know that the\nproblem is about to happen. So for that reason, we should show the\nsubtransaction count. With showing count user can exactly know if\nthere are some sessions that could create problems in near future and\nmay take some action before the problem actually happens.\n\n> IMO the number matters a lot when analyzing why this is happening / how to\n> react. A session occasionally reaching 65 subxids might be tolerable and not\n> necessarily indicative of a bug. But 100k subxids is something that one just\n> can't accept.\n\nActually, we will see the problem as soon as it has crossed 64 because\nafter that for any visibility checking we need to check the SLRU. 
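For reference, crossing that threshold is easy to demonstrate. (A sketch only: the table name is made up, and PGPROC_MAX_CACHED_SUBXIDS defaults to 64.)

```sql
BEGIN;
-- Each SAVEPOINT opens a subtransaction, and the INSERT under it forces a
-- subxid to be assigned.  Once more than 64 subxids have been assigned in
-- this top-level transaction, the backend's subxid cache overflows and
-- stays overflowed until the transaction ends.
SAVEPOINT s1;
INSERT INTO t VALUES (1);
SAVEPOINT s2;
INSERT INTO t VALUES (2);
-- ... and so on, until more than 64 subxids have been assigned ...
COMMIT;
```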
So\nI feel both count and overflow are important. Count to know that we\nare heading towards overflow and overflow to know that it has already\nhappened.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Nov 2022 10:53:48 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Mon, Nov 14, 2022 at 10:18 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n\n>> Do you have a view on this point?\n>>\n>\n> NULL when overflowed seems like the opposite of the desired effect, calling attention to the exceptional status. Make it a text column and write \"overflow\" or \"###\" as appropriate. Anyone using the column is going to end up wanting to special-case overflow anyway and number-to-text conversion aside from overflow is simple enough if a number, and not just a display label, is needed.\n\n+1, if we are interested to add only one column then this could be the\nbest way to show.\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Nov 2022 10:55:48 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Mon, Nov 14, 2022 at 4:17 PM Andres Freund <andres@anarazel.de> wrote:\n> Perhaps this would better be tackled by a new \"visibility\" view. It could show\n> - number of sessions with a snapshot\n> - max age of backend xmin\n> - pid with max backend xmin\n> - number of sessions that suboverflowed\n> - pid of the session with the most subxids\n> - age of the oldest prepared xact\n> - age of the oldest slot\n> - age of the oldest walsender\n> - ...\n>\n> Perhaps implemented in SQL, with new functions for accessing the properties we\n> don't expose today. 
That'd address the pg_stat_activity width, while still\n> allowing very granular access when necessary. And provide insight into\n> something that's way to hard to query right now.\n\nI wouldn't be against a pg_stat_visibility view, but I don't think I'd\nwant it to just output a single summary row. I think we really need to\ngive people an easy way to track down which session is the problem;\nthe existence of the problem is already obvious from the SLRU-related\nwait events.\n\nIf we moved backend_xid and backend_xmin out to this new view, added\nthese subtransaction-related things, and allowed for a join on pid, I\ncould get behind that, but it's probably a bit more painful for users\nthan just accepting that the view is going to further outgrow the\nterminal window. It might be better in the long term because perhaps\nwe're going to find more things that would fit into this new view, but\nI don't know.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Nov 2022 09:04:25 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn 2022-11-15 09:04:25 -0500, Robert Haas wrote:\n> On Mon, Nov 14, 2022 at 4:17 PM Andres Freund <andres@anarazel.de> wrote:\n> > Perhaps this would better be tackled by a new \"visibility\" view. It could show\n> > - number of sessions with a snapshot\n> > - max age of backend xmin\n> > - pid with max backend xmin\n> > - number of sessions that suboverflowed\n> > - pid of the session with the most subxids\n> > - age of the oldest prepared xact\n> > - age of the oldest slot\n> > - age of the oldest walsender\n> > - ...\n> >\n> > Perhaps implemented in SQL, with new functions for accessing the properties we\n> > don't expose today. That'd address the pg_stat_activity width, while still\n> > allowing very granular access when necessary. 
And provide insight into\n> > something that's way to hard to query right now.\n> \n> I wouldn't be against a pg_stat_visibility view, but I don't think I'd\n> want it to just output a single summary row.\n\nI think it'd be more helpful to just have a single row (or maybe a fixed\nnumber of rows) - from what I've observed the main problem people have is\ncondensing the available information, rather than not having information\navailable at all.\n\n\n> I think we really need to\n> give people an easy way to track down which session is the problem;\n> the existence of the problem is already obvious from the SLRU-related\n> wait events.\n\nHence the suggestion to show the pid of the session with the most subxacts. We\nprobably also should add a bunch of accessor functions for people that want\nmore detail... But just seeing in one place what's problematic would be the\nbig get, the rest will be a small percentage of users IME.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Nov 2022 11:29:54 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Tue, Nov 15, 2022 at 7:34 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Nov 14, 2022 at 4:17 PM Andres Freund <andres@anarazel.de> wrote:\n> > Perhaps this would better be tackled by a new \"visibility\" view. It could show\n> > - number of sessions with a snapshot\n> > - max age of backend xmin\n> > - pid with max backend xmin\n> > - number of sessions that suboverflowed\n> > - pid of the session with the most subxids\n> > - age of the oldest prepared xact\n> > - age of the oldest slot\n> > - age of the oldest walsender\n> > - ...\n> >\n> > Perhaps implemented in SQL, with new functions for accessing the properties we\n> > don't expose today. That'd address the pg_stat_activity width, while still\n> > allowing very granular access when necessary. 
And provide insight into\n> > something that's way to hard to query right now.\n>\n> I wouldn't be against a pg_stat_visibility view, but I don't think I'd\n> want it to just output a single summary row. I think we really need to\n> give people an easy way to track down which session is the problem;\n> the existence of the problem is already obvious from the SLRU-related\n> wait events.\n>\n\nI too feel that per-backend information would be more useful and easier\nto use than a single summary row. I think it's fine to create a\nnew view if we do not want to add more members to the existing view.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Nov 2022 16:01:49 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Tue, Nov 15, 2022 at 2:29 PM Andres Freund <andres@anarazel.de> wrote:\n> Hence the suggestion to show the pid of the session with the most subxacts. We\n> probably also should add a bunch of accessor functions for people that want\n> more detail... But just seeing in one place what's problematic would be the\n> big get, the rest will be a small percentage of users IME.\n\nI guess all I can say here is that my experience differs.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Nov 2022 12:59:19 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Mon, Nov 14, 2022 at 10:09:57AM -0500, Robert Haas wrote:\n> I think I fundamentally disagree with the idea that we should refuse\n> to expose instrumentation data because some day the internals might\n> change. 
If we accepted that argument categorically, we wouldn't have\n> things like backend_xmin or backend_xid in pg_stat_activity, or wait\n> events either, but we do have those things and users find them useful.\n> They suck in the sense that you need to know quite a bit about how the\n> internals work in order to use them to find problems, but people who\n> want to support production PostgreSQL instances have to learn about\n> how those internals work one way or the other because they\n> demonstrably matter. It is absolutely stellar when we can say \"hey, we\n\nI originally thought having this value in pg_stat_activity was overkill,\nbut seeing the other internal/warning columns in that view, I think it\nmakes sense. Oddly, is our 64 snapshot performance limit even\ndocumented anywhere? I know it is in Simon's patch I am working on.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Wed, 23 Nov 2022 14:01:27 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Wed, Nov 23, 2022 at 2:01 PM Bruce Momjian <bruce@momjian.us> wrote:\n> I originally thought having this value in pg_stat_activity was overkill,\n> but seeing the other internal/warning columns in that view, I think it\n> makes sense. Oddly, is our 64 snapshot performance limit even\n> documented anywhere? I know it is in Simon's patch I am working on.\n\nIf it is, I'm not aware of it. We often don't document things that are\nas internal as that.\n\nOne thing that I'd really like to see better documented is exactly\nwhat it is that causes a problem. But first we'd have to understand it\nourselves. 
It's not as simple as \"if you have more than 64 subxacts in\nany top-level xact, kiss performance good-bye!\" because for there to\nbe a problem, at least one backend (and probably many) have to take\nsnapshots that see that overflowed subxact cache and thus\nget marked suboverflowed. Then after that, those snapshots have to be\nused often enough that the additional visibility-checking cost becomes\na problem. But it's also not good enough to just use those snapshots\nagainst any old tuples, because tuples that are older than the\nsnapshot's xmin aren't going to cause additional lookups, nor are\ntuples newer than the snapshot's xmax.\n\nSo it feels a bit complicated to me to think through the workload\nwhere this really hurts. What I'm imagining is that you need a\nrelatively long-running transaction that overflows its subxact\nlimitation but then doesn't commit, so that lots of other backends get\noverflowed snapshots, and also so that the xmin and xmax of the\nsnapshots being taken get further apart. Or maybe you can have a\nseries of short-running transactions that each overflow their subxact\ncache briefly, but they overlap, so that there's usually at least 1\naround in that state, but in that case I think you need a separate\nlong-running transaction to push xmin and xmax further apart. Either\nway, the backends that get the overflowed snapshots then need to go\nlook at some table data that's been recently modified, so that there\nare xmin and xmax values newer than the snapshot's xmin.\n\nIntuitively, I feel like this should be pretty rare, and largely\navoidable if you just don't use long-running transactions, which is a\ngood thing to avoid for other reasons anyway. But there may be more to\nit than I'm realizing, because I've seen customers hit this issue\nmultiple times. 
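To make that workload concrete, here is a schematic sketch (the session labels and table names are hypothetical, and PGPROC_MAX_CACHED_SUBXIDS is 64):

```sql
-- Session 1: a long-running transaction that overflows its subxid cache.
-- Each plpgsql EXCEPTION block is a subtransaction, and the INSERT inside
-- it forces a subxid to be assigned; the transaction is then left open, so
-- snapshots taken while it lives are marked suboverflowed.
BEGIN;
DO $$
BEGIN
  FOR i IN 1..100 LOOP
    BEGIN
      INSERT INTO scratch VALUES (i);  -- hypothetical table
    EXCEPTION WHEN OTHERS THEN
      NULL;
    END;
  END LOOP;
END
$$;
-- ... transaction deliberately left uncommitted ...

-- Sessions 2..N: keep modifying and re-reading recently changed rows, so
-- visibility checks see XIDs between their snapshot's xmin and xmax and
-- each such check may need a pg_subtrans lookup.
UPDATE accounts SET balance = balance + 1 WHERE id = 42;  -- hypothetical
SELECT sum(balance) FROM accounts;
```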
I wonder whether there's some subtlety to the\ntriggering conditions that I'm not fully understanding.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Nov 2022 15:25:39 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn 2022-11-23 15:25:39 -0500, Robert Haas wrote:\n> One thing that I'd really like to see better documented is exactly\n> what it is that causes a problem. But first we'd have to understand it\n> ourselves. It's not as simple as \"if you have more than 64 subxacts in\n> any top-level xact, kiss performance good-bye!\" because for there to\n> be a problem, at least one backend (and probably many) have to take\n> snapshots that include that see that overflowed subxact cache and thus\n> get marked suboverflowed. Then after that, those snapshots have to be\n> used often enough that the additional visibility-checking cost becomes\n> a problem. But it's also not good enough to just use those snapshots\n> against any old tuples, because tuples that are older than the\n> snapshot's xmin aren't going to cause additional lookups, nor are\n> tuples newer than the snapshot's xmax.\n\nIndeed. This is why I was thinking that just alerting for overflowed xact\nisn't particularly helpful. You really want to see how much they overflow and\nhow often.\n\nBut even that might not be that helpful. Perhaps what we actually need is an\naggregate measure showing the time spent doing subxact lookups due to\noverflowed snapshots? 
Seeing a substantial amount of time spent doing subxact\nlookups would be a much more accurate call to action than seeing that some\nsessions have a lot of subxacts.\n\n\n> Intuitively, I feel like this should be pretty rare, and largely\n> avoidable if you just don't use long-running transactions, which is a\n> good thing to avoid for other reasons anyway.\n\nI think they're just not always avoidable, even in a very well operated\nsystem.\n\n\nI wonder if we could lower the impact of suboverflowed snapshots by improving\nthe representation in PGPROC and SnapshotData. What if we\n\na) Recorded the min and max assigned subxid in PGPROC\n\nb) Instead of giving up in GetSnapshotData() once we see a suboverflowed\n PGPROC, store the min/max subxid of the proc in SnapshotData. We could\n reliably \"steal\" space for that from ->subxip, as we won't need to store\n subxids for that proc.\n\nc) When determining visibility with a suboverflowed snapshot we use the\n ranges from b) to check whether we need to do a subtrans lookup. I think\n that'll often prevent subtrans lookups.\n\nd) If we encounter a subxid whose parent is in progress and not in ->subxid,\n and subxcnt isn't the max, add that subxid to subxip. That's not free\n because we'd basically need to do an insertion sort, but likely still a lot\n cheaper than doing repeated subtrans lookups.\n\nI think we'd just need one or two additional fields in SnapshotData.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 23 Nov 2022 12:56:54 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Thu, Nov 24, 2022 at 2:26 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Indeed. This is why I was thinking that just alerting for overflowed xact\n> isn't particularly helpful. 
You really want to see how much they overflow and\n> how often.\n\nI think the ability to monitor the subtransaction count and overflow\nstatus is helpful, at least for troubleshooting purposes. By monitoring\nregularly, users will know which backend (pid) is using particularly\nmany subtransactions and is prone to overflow, and which backends are\nactually causing sub-overflow frequently.\n\n> I think they're just not always avoidable, even in a very well operated\n> system.\n>\n>\n> I wonder if we could lower the impact of suboverflowed snapshots by improving\n> the representation in PGPROC and SnapshotData. What if we\n>\n> a) Recorded the min and max assigned subxid in PGPROC\n>\n> b) Instead of giving up in GetSnapshotData() once we see a suboverflowed\n> PGPROC, store the min/max subxid of the proc in SnapshotData. We could\n> reliably \"steal\" space for that from ->subxip, as we won't need to store\n> subxids for that proc.\n>\n> c) When determining visibility with a suboverflowed snapshot we use the\n> ranges from b) to check whether we need to do a subtrans lookup. I think\n> that'll often prevent subtrans lookups.\n>\n> d) If we encounter a subxid whose parent is in progress and not in ->subxid,\n> and subxcnt isn't the max, add that subxid to subxip. That's not free\n> because we'd basically need to do an insertion sort, but likely still a lot\n> cheaper than doing repeated subtrans lookups.\n>\n> I think we'd just need a one or two additional fields in SnapshotData.\n\n+1\n\nI think this approach will be helpful in many cases, especially when\nonly some of the backends are creating sub-overflow and impacting\noverall system performance. 
Now, most of the xids, especially the top\nxid, will not fall in that range (unless that sub-overflowing backend\nis constantly generating subxids and increasing its range) and the\nlookups for those xids can be done directly in the snapshot's xip\narray.\n\nOn another thought, in XidInMVCCSnapshot() in case of sub-overflow why\ndon't we look into the snapshot's xip array first and see if the xid\nexists there? If not, then we can look into the pg_subtrans SLRU,\nfetch the top xid, and look again into the xip array. It will be\nmore costly in cases where we do not find the xid in the xip array because\nthen we will have to search this array twice, but I think looking into\nthis array is much cheaper than directly accessing the SLRU.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 25 Nov 2022 09:31:17 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Wed, Nov 23, 2022 at 3:56 PM Andres Freund <andres@anarazel.de> wrote:\n> Indeed. This is why I was thinking that just alerting for overflowed xact\n> isn't particularly helpful. You really want to see how much they overflow and\n> how often.\n\nI think if we just expose the is-overflowed field and the count, people\ncan poll. It works fine for wait events and I think it's fine here,\ntoo.\n\n> But even that might not be that helpful. Perhaps what we actually need is an\n> aggregate measure showing the time spent doing subxact lookups due to\n> overflowed snapshots? Seeing a substantial amount of time spent doing subxact\n> lookups would be much more accurate call to action than seeing a that some\n> sessions have a lot of subxacts.\n\nThat's not responsive to the need that I have. 
I need users to be able\nto figure out which backend(s) are overflowing their snapshots -- and\nperhaps how badly and how often --- not which backends are incurring\nan expense as a result. There may well be a use case for the latter\nthing but it's a different problem.\n\n> I wonder if we could lower the impact of suboverflowed snapshots by improving\n> the representation in PGPROC and SnapshotData. What if we\n>\n> a) Recorded the min and max assigned subxid in PGPROC\n>\n> b) Instead of giving up in GetSnapshotData() once we see a suboverflowed\n> PGPROC, store the min/max subxid of the proc in SnapshotData. We could\n> reliably \"steal\" space for that from ->subxip, as we won't need to store\n> subxids for that proc.\n>\n> c) When determining visibility with a suboverflowed snapshot we use the\n> ranges from b) to check whether we need to do a subtrans lookup. I think\n> that'll often prevent subtrans lookups.\n\nWouldn't you basically need to take the union of all the ranges,\nprobably by keeping the lowest min and the highest max? I'm not sure\nhow much that would really help at that point.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 30 Nov 2022 11:01:00 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Wed, Nov 30, 2022 at 11:01 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> That's not responsive to the need that I have. I need users to be able\n> to figure out which backend(s) are overflowing their snapshots -- and\n> perhaps how badly and how often --- not which backends are incurring\n> an expense as a result. There may well be a use case for the latter\n> thing but it's a different problem.\n\nSo ... I want to go ahead and commit Dilip's v4 patch, or something\nvery like it. Most people were initially supportive. 
Tom expressed\nsome opposition, but it sounds like that was mostly to the discussion\ngoing on and on rather than the idea per se. Andres also expressed\nsome concerns, but I really think the problem he's worried about is\nsomething slightly different and need not block this work. I note also\nthat the v4 patch is designed in such a way that it does not change\nany view definitions, so the compatibility impact of committing it is\nbasically nil.\n\nAny strenuous objections?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 12 Dec 2022 11:15:43 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Mon, Dec 12, 2022 at 11:15:43AM -0500, Robert Haas wrote:\n> Any strenuous objections?\n\nNope. In fact, +1. Until more work is done to alleviate the performance\nissues, this information will likely prove useful.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 12 Dec 2022 09:33:51 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Mon, Dec 12, 2022 at 09:33:51AM -0800, Nathan Bossart wrote:\n> On Mon, Dec 12, 2022 at 11:15:43AM -0500, Robert Haas wrote:\n> > Any strenuous objections?\n> \n> Nope. In fact, +1. Until more work is done to alleviate the performance\n> issues, this information will likely prove useful.\n\nThe docs could use a bit of attention. 
Otherwise +1.\n\ndiff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\nindex 4efa1d5fca0..ac15e2ce789 100644\n--- a/doc/src/sgml/monitoring.sgml\n+++ b/doc/src/sgml/monitoring.sgml\n@@ -5680,12 +5680,12 @@ FROM pg_stat_get_backend_idset() AS backendid;\n <returnvalue>record</returnvalue>\n </para>\n <para>\n- Returns a record of information about the backend's subtransactions.\n- The fields returned are <parameter>subxact_count</parameter> identifies\n- number of active subtransaction and <parameter>subxact_overflow\n- </parameter> shows whether the backend's subtransaction cache is\n- overflowed or not.\n- </para></entry>\n+ Returns a record of information about the subtransactions of the backend\n+ with the specified ID.\n+ The fields returned are <parameter>subxact_count</parameter>, which\n+ identifies the number of active subtransaction and\n+ <parameter>subxact_overflow</parameter>, which shows whether the\n+ backend's subtransaction cache is overflowed or not.\n </para></entry>\n </row>\n \n\n\n", "msg_date": "Mon, 12 Dec 2022 11:42:02 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Mon, Dec 12, 2022 at 12:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\n> index 4efa1d5fca0..ac15e2ce789 100644\n> --- a/doc/src/sgml/monitoring.sgml\n> +++ b/doc/src/sgml/monitoring.sgml\n> @@ -5680,12 +5680,12 @@ FROM pg_stat_get_backend_idset() AS backendid;\n> <returnvalue>record</returnvalue>\n> </para>\n> <para>\n> - Returns a record of information about the backend's subtransactions.\n> - The fields returned are <parameter>subxact_count</parameter> identifies\n> - number of active subtransaction and <parameter>subxact_overflow\n> - </parameter> shows whether the backend's subtransaction cache is\n> - overflowed or not.\n> - </para></entry>\n> 
+ Returns a record of information about the subtransactions of the backend\n> + with the specified ID.\n> + The fields returned are <parameter>subxact_count</parameter>, which\n> + identifies the number of active subtransaction and\n> + <parameter>subxact_overflow</parameter>, which shows whether the\n> + backend's subtransaction cache is overflowed or not.\n> </para></entry>\n> </row>\n\nMakes sense.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 12 Dec 2022 12:51:42 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Mon, Dec 12, 2022 at 11:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Dec 12, 2022 at 12:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\n> > index 4efa1d5fca0..ac15e2ce789 100644\n> > --- a/doc/src/sgml/monitoring.sgml\n> > +++ b/doc/src/sgml/monitoring.sgml\n> > @@ -5680,12 +5680,12 @@ FROM pg_stat_get_backend_idset() AS backendid;\n> > <returnvalue>record</returnvalue>\n> > </para>\n> > <para>\n> > - Returns a record of information about the backend's subtransactions.\n> > - The fields returned are <parameter>subxact_count</parameter> identifies\n> > - number of active subtransaction and <parameter>subxact_overflow\n> > - </parameter> shows whether the backend's subtransaction cache is\n> > - overflowed or not.\n> > - </para></entry>\n> > + Returns a record of information about the subtransactions of the backend\n> > + with the specified ID.\n> > + The fields returned are <parameter>subxact_count</parameter>, which\n> > + identifies the number of active subtransaction and\n> > + <parameter>subxact_overflow</parameter>, which shows whether the\n> > + backend's subtransaction cache is overflowed or not.\n> > </para></entry>\n> > </row>\n>\n> Makes sense.\n\n+1\n\n-- \nRegards,\nDilip 
Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 13 Dec 2022 09:39:08 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Tue, Dec 13, 2022 at 5:09 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Dec 12, 2022 at 11:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Mon, Dec 12, 2022 at 12:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\n> > > index 4efa1d5fca0..ac15e2ce789 100644\n> > > --- a/doc/src/sgml/monitoring.sgml\n> > > +++ b/doc/src/sgml/monitoring.sgml\n> > > @@ -5680,12 +5680,12 @@ FROM pg_stat_get_backend_idset() AS backendid;\n> > > <returnvalue>record</returnvalue>\n> > > </para>\n> > > <para>\n> > > - Returns a record of information about the backend's subtransactions.\n> > > - The fields returned are <parameter>subxact_count</parameter> identifies\n> > > - number of active subtransaction and <parameter>subxact_overflow\n> > > - </parameter> shows whether the backend's subtransaction cache is\n> > > - overflowed or not.\n> > > - </para></entry>\n> > > + Returns a record of information about the subtransactions of the backend\n> > > + with the specified ID.\n> > > + The fields returned are <parameter>subxact_count</parameter>, which\n> > > + identifies the number of active subtransaction and\n> > > + <parameter>subxact_overflow</parameter>, which shows whether the\n> > > + backend's subtransaction cache is overflowed or not.\n> > > </para></entry>\n> > > </row>\n> >\n> > Makes sense.\n>\n> +1\n\n+1\n\n\n", "msg_date": "Tue, 13 Dec 2022 08:29:03 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Tue, Dec 13, 2022 at 2:29 AM Julien Rouhaud <rjuju123@gmail.com> 
wrote:\n> > > Makes sense.\n> >\n> > +1\n>\n> +1\n\nCommitted with a bit more word-smithing on the documentation.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Dec 2022 14:56:43 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Mon, Dec 19, 2022 at 11:57 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Dec 13, 2022 at 2:29 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > > > Makes sense.\n> > >\n> > > +1\n> >\n> > +1\n>\n> Committed with a bit more word-smithing on the documentation.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n> Hi,\nIt seems the comment for `backend_subxact_overflowed` missed a word.\n\nPlease see the patch.\n\nThanks", "msg_date": "Mon, 19 Dec 2022 12:48:05 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Mon, Dec 19, 2022 at 3:48 PM Ted Yu <yuzhihong@gmail.com> wrote:\n> It seems the comment for `backend_subxact_overflowed` missed a word.\n>\n> Please see the patch.\n\nCommitted this fix, thanks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Dec 2022 16:02:34 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Tue, Dec 20, 2022 at 2:32 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Dec 19, 2022 at 3:48 PM Ted Yu <yuzhihong@gmail.com> wrote:\n> > It seems the comment for `backend_subxact_overflowed` missed a word.\n> >\n> > Please see the patch.\n>\n> Committed this fix, thanks.\n\nThanks, Robert!\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 20 Dec 2022 09:53:02 +0530", "msg_from": "Dilip 
Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" }, { "msg_contents": "On Tue, 20 Dec 2022 at 09:23, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Dec 20, 2022 at 2:32 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Mon, Dec 19, 2022 at 3:48 PM Ted Yu <yuzhihong@gmail.com> wrote:\n> > > It seems the comment for `backend_subxact_overflowed` missed a word.\n> > >\n> > > Please see the patch.\n> >\n> > Committed this fix, thanks.\n>\n> Thanks, Robert!\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>\n>\n\n\nHi hackers!\n\nNice patch, it seems it may be useful in cases like alerting that subxid\noverflow happened in a database, or whatever.\nBut I'm curious, what is the follow-up work on this? I think it may be\nway more helpful to, for example, log queries causing sub-tx\noverflow,\nor even kill the backend causing sub-tx overflow, with GUC variables\nsetting the server behaviour.\nFor example, in Greenplum there is a gp_subtransaction_overflow\nextension and GUC for simply logging problematic queries[1]. Can we\nhave something\nsimilar in PostgreSQL on the server-side?\n\n[1] https://github.com/greenplum-db/gpdb/blob/6X_STABLE/gpcontrib/gp_subtransaction_overflow/gp_subtransaction_overflow.c#L42\n\n\n", "msg_date": "Wed, 8 Feb 2023 14:10:55 +0500", "msg_from": "Kirill Reshke <reshkekirill@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add sub-transaction overflow status in pg_stat_activity" } ]
[ { "msg_contents": "Hi,\n\nCurrently one can know the kind of on-going/last checkpoint (shutdown,\nend-of-recovery, immediate, force etc.) only via server logs, and that too\nonly when the log_checkpoints GUC is on. At times, the users/service layer\ncomponents would want to know the kind of checkpoint (along with other\ncheckpoint related info) to take some actions and it will be a bit\ndifficult to search through the server logs. The checkpoint info can\nbe obtained from the control file (either by pg_control_checkpoint()\nor by the pg_controldata tool) whereas the checkpoint kind isn't available\nthere.\n\nHow about we add an extra string field to the control file alongside\nthe other checkpoint info it already has? This way, the existing\npg_control_checkpoint() or pg_controldata tool can easily be enhanced\nto show the checkpoint kind as well. One concern is that we don't want\nto increase the size of pg_controldata by more than the typical block\nsize (of 8K) to avoid any torn-writes. With this change, we might add\nat most the strings specified at [1]. Adding it to the control file has\nthe advantage of preserving the last checkpoint kind, which might be\nuseful.\n\nThoughts?\n\n[1] for checkpoint: \"checkpoint shutdown end-of-recovery immediate\nforce wait wal time flush-all\"\nfor restartpoint: \"restartpoint shutdown end-of-recovery immediate\nforce wait wal time flush-all\"\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 7 Dec 2021 20:06:22 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "On 12/7/21 15:36, Bharath Rupireddy wrote:\n> Hi,\n> \n> Currently one can know the kind of on-going/last checkpoint (shutdown,\n> end-of-recovery, immediate, force etc.) only via server logs that to\n> when log_checkpoints GUC is on. 
At times, the users/service layer\n> components would want to know the kind of checkpoint (along with other\n> checkpoint related info) to take some actions and it will be a bit\n> difficult to search through the server logs. The checkpoint info can\n> be obtained from the control file (either by pg_control_checkpoint()\n> or by pg_controldata tool) whereas checkpoint kind isn't available\n> there.\n> \n> How about we add an extra string field to the control file alongside\n> the other checkpoint info it already has? This way, the existing\n> pg_control_checkpoint() or pg_controldata tool can easily be enhanced\n> to show the checkpoint kind as well. One concern is that we don't want\n> to increase the size of pg_controldata by more than the typical block\n> size (of 8K) to avoid any torn-writes. With this change, we might add\n> at max the strings specified at [1]. Adding it to the control file has\n> an advantage of preserving the last checkpoint kind which might be\n> useful.\n> \n> Thoughts?\n> \n\nI agree it might be useful to provide information about the nature of\nthe checkpoint, and perhaps even PID of the backend that triggered it\n(although that may be tricky, if the backend terminates).\n\nI'm not sure about adding it to control data, though. That doesn't seem\nlike a very good match for something that's mostly for monitoring.\n\nWe already have some checkpoint info in pg_stat_bgwriter, but that's\njust aggregated data, not very convenient for info about the current\ncheckpoint. So maybe we should add pg_stat_progress_checkpoint, showing\nvarious info about the current checkpoint?\n\n> [1] for checkpoint: \"checkpoint shutdown end-of-recovery immediate\n> force wait wal time flush-all\"\n> for restartpoint: \"restartpoint shutdown end-of-recovery immediate\n> force wait wal time flush-all\"\n> \n\nI'd bet squashing all of this into a single string (not really a flag)\nwill just mean people will have to parse it, etc. 
Keeping individual\nboolean flags (or even separate string fields) would be better, I guess.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 7 Dec 2021 23:18:37 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "On Tue, Dec 07, 2021 at 11:18:37PM +0100, Tomas Vondra wrote:\n> On 12/7/21 15:36, Bharath Rupireddy wrote:\n> > Currently one can know the kind of on-going/last checkpoint (shutdown,\n> > end-of-recovery, immediate, force etc.) only via server logs that to\n> > when log_checkpoints GUC is on.\n\n> > The checkpoint info can be obtained from the control file (either by\n> > pg_control_checkpoint() or by pg_controldata tool) whereas checkpoint kind\n> > isn't available there.\n\n> > How about we add an extra string field to the control file alongside\n> > the other checkpoint info it already has? This way, the existing\n> > pg_control_checkpoint() or pg_controldata tool can easily be enhanced\n> > to show the checkpoint kind as well. One concern is that we don't want\n> > to increase the size of pg_controldata by more than the typical block\n> > size (of 8K) to avoid any torn-writes. With this change, we might add\n> > at max the strings specified at [1]. Adding it to the control file has\n> > an advantage of preserving the last checkpoint kind which might be\n> > useful.\n\n> > [1] for checkpoint: \"checkpoint shutdown end-of-recovery immediate\n> > force wait wal time flush-all\"\n> > for restartpoint: \"restartpoint shutdown end-of-recovery immediate\n> > force wait wal time flush-all\"\n> \n> I'd bet squashing all of this into a single string (not really a flag)\n> will just mean people will have to parse it, etc. 
Keeping individual\n> boolean flags (or even separate string fields) would be better, I guess.\n\nSince the size of controldata is a concern, I suggest to add an int16 to\npopulate with flags, which pg_controldata can parse for display.\n\nNote that this other patch intends to add the timestamp and LSN of most recent\nrecovery to controldata.\nhttps://commitfest.postgresql.org/36/3183/\n\n> We already have some checkpoint info in pg_stat_bgwriter, but that's\n> just aggregated data, not very convenient for info about the current\n> checkpoint. So maybe we should add pg_stat_progress_checkpoint, showing\n> various info about the current checkpoint?\n\nIt could go into the pg_stat_checkpointer view, which would be the culmination\nof another patch (cc Melanie).\nhttps://commitfest.postgresql.org/36/3272/\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 7 Dec 2021 16:37:04 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "On Wed, Dec 8, 2021 at 3:48 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 12/7/21 15:36, Bharath Rupireddy wrote:\n> > Hi,\n> >\n> > Currently one can know the kind of on-going/last checkpoint (shutdown,\n> > end-of-recovery, immediate, force etc.) only via server logs that to\n> > when log_checkpoints GUC is on. At times, the users/service layer\n> > components would want to know the kind of checkpoint (along with other\n> > checkpoint related info) to take some actions and it will be a bit\n> > difficult to search through the server logs. The checkpoint info can\n> > be obtained from the control file (either by pg_control_checkpoint()\n> > or by pg_controldata tool) whereas checkpoint kind isn't available\n> > there.\n> >\n> > How about we add an extra string field to the control file alongside\n> > the other checkpoint info it already has? 
This way, the existing\n> > pg_control_checkpoint() or pg_controldata tool can easily be enhanced\n> > to show the checkpoint kind as well. One concern is that we don't want\n> > to increase the size of pg_controldata by more than the typical block\n> > size (of 8K) to avoid any torn-writes. With this change, we might add\n> > at max the strings specified at [1]. Adding it to the control file has\n> > an advantage of preserving the last checkpoint kind which might be\n> > useful.\n> >\n> > Thoughts?\n> >\n>\n> I agree it might be useful to provide information about the nature of\n> the checkpoint, and perhaps even PID of the backend that triggered it\n> (although that may be tricky, if the backend terminates).\n\nThanks. I agree to have pg_stat_progress_checkpoint and yes PID of the\ntriggered backend can possibly go there (we can mention in the\ndocumentation that the backend that triggered the checkpoint can get\nterminated).\n\n> I'm not sure about adding it to control data, though. That doesn't seem\n> like a very good match for something that's mostly for monitoring.\n\nHaving it in the control data file (along with the existing checkpoint\ninformation) will be helpful to know what was the last checkpoint\ninformation and we can use the existing pg_control_checkpoint function\nor the tool to emit that info. I plan to add an int16 flag as\nsuggested by Justin in this thread and come up with a patch.\n\n> We already have some checkpoint info in pg_stat_bgwriter, but that's\n> just aggregated data, not very convenient for info about the current\n> checkpoint. 
So maybe we should add pg_stat_progress_checkpoint, showing\n> various info about the current checkpoint?\n\n+1 to have pg_stat_progress_checkpoint view to know what's going on\nwith the current checkpoint.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 8 Dec 2021 07:24:34 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "On Wed, Dec 8, 2021 at 4:07 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I'd bet squashing all of this into a single string (not really a flag)\n> > will just mean people will have to parse it, etc. Keeping individual\n> > boolean flags (or even separate string fields) would be better, I guess.\n>\n> Since the size of controldata is a concern, I suggest to add an int16 to\n> populate with flags, which pg_controldata can parse for display.\n\n+1. I will come up with a patch soon.\n\n> Note that this other patch intends to add the timestamp and LSN of most recent\n> recovery to controldata.\n> https://commitfest.postgresql.org/36/3183/\n\nThanks. I will go through it separately.\n\n> > We already have some checkpoint info in pg_stat_bgwriter, but that's\n> > just aggregated data, not very convenient for info about the current\n> > checkpoint. So maybe we should add pg_stat_progress_checkpoint, showing\n> > various info about the current checkpoint?\n>\n> It could go into the pg_stat_checkpointer view, which would be the culmination\n> of another patch (cc Melanie).\n> https://commitfest.postgresql.org/36/3272/\n\n+1 to have pg_stat_progress_checkpoint view. We have\nCheckpointStatsData already. What we need is to make this structure\nshared and add a little more info to represent the progress, so that\nthe other backends can access it. 
I think we can discuss this in a\nseparate thread to give it a fresh try rather than proposing this as a\npart of another thread. I will spend some time on\npg_stat_progress_checkpoint proposal and try to come up with a\nseparate thread to discuss.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 8 Dec 2021 07:24:41 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "\n\nOn 12/8/21 02:54, Bharath Rupireddy wrote:\n> On Wed, Dec 8, 2021 at 3:48 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 12/7/21 15:36, Bharath Rupireddy wrote:\n>>> Hi,\n>>>\n>>> Currently one can know the kind of on-going/last checkpoint (shutdown,\n>>> end-of-recovery, immediate, force etc.) only via server logs that to\n>>> when log_checkpoints GUC is on. At times, the users/service layer\n>>> components would want to know the kind of checkpoint (along with other\n>>> checkpoint related info) to take some actions and it will be a bit\n>>> difficult to search through the server logs. The checkpoint info can\n>>> be obtained from the control file (either by pg_control_checkpoint()\n>>> or by pg_controldata tool) whereas checkpoint kind isn't available\n>>> there.\n>>>\n>>> How about we add an extra string field to the control file alongside\n>>> the other checkpoint info it already has? This way, the existing\n>>> pg_control_checkpoint() or pg_controldata tool can easily be enhanced\n>>> to show the checkpoint kind as well. One concern is that we don't want\n>>> to increase the size of pg_controldata by more than the typical block\n>>> size (of 8K) to avoid any torn-writes. With this change, we might add\n>>> at max the strings specified at [1]. 
Adding it to the control file has\n>>> an advantage of preserving the last checkpoint kind which might be\n>>> useful.\n>>>\n>>> Thoughts?\n>>>\n>>\n>> I agree it might be useful to provide information about the nature of\n>> the checkpoint, and perhaps even PID of the backend that triggered it\n>> (although that may be tricky, if the backend terminates).\n> \n> Thanks. I agree to have pg_stat_progress_checkpoint and yes PID of the\n> triggered backend can possibly go there (we can mention in the\n> documentation that the backend that triggered the checkpoint can get\n> terminated).\n> \n\nMy concern is someone might run something that requires a checkpoint, so\nwe start it and put the PID into the catalog. And then the person aborts\nthe command and starts doing something else. But that does not abort the\ncheckpoint, but the backend now runs something that doesn't requite\ncheckpoint, which is rather confusing.\n\n>> I'm not sure about adding it to control data, though. That doesn't seem\n>> like a very good match for something that's mostly for monitoring.\n> \n> Having it in the control data file (along with the existing checkpoint\n> information) will be helpful to know what was the last checkpoint\n> information and we can use the existing pg_control_checkpoint function\n> or the tool to emit that info. I plan to add an int16 flag as\n> suggested by Justin in this thread and come up with a patch.\n> \n\nOK, although I'm not sure it's all that useful (if we have that in some\nsort of system view).\n\n>> We already have some checkpoint info in pg_stat_bgwriter, but that's\n>> just aggregated data, not very convenient for info about the current\n>> checkpoint. 
So maybe we should add pg_stat_progress_checkpoint, showing\n>> various info about the current checkpoint?\n> \n> +1 to have pg_stat_progress_checkpoint view to know what's going on\n> with the current checkpoint.\n> \n\nDo you plan to add it to this patch, or should it be a separate patch?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 8 Dec 2021 03:04:23 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "On Wed, Dec 8, 2021 at 7:34 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> >> I agree it might be useful to provide information about the nature of\n> >> the checkpoint, and perhaps even PID of the backend that triggered it\n> >> (although that may be tricky, if the backend terminates).\n> >\n> > Thanks. I agree to have pg_stat_progress_checkpoint and yes PID of the\n> > triggered backend can possibly go there (we can mention in the\n> > documentation that the backend that triggered the checkpoint can get\n> > terminated).\n> >\n>\n> My concern is someone might run something that requires a checkpoint, so\n> we start it and put the PID into the catalog. And then the person aborts\n> the command and starts doing something else. But that does not abort the\n> checkpoint, but the backend now runs something that doesn't requite\n> checkpoint, which is rather confusing.\n>\n> >> I'm not sure about adding it to control data, though. That doesn't seem\n> >> like a very good match for something that's mostly for monitoring.\n> >\n> > Having it in the control data file (along with the existing checkpoint\n> > information) will be helpful to know what was the last checkpoint\n> > information and we can use the existing pg_control_checkpoint function\n> > or the tool to emit that info. 
I plan to add an int16 flag as\n> > suggested by Justin in this thread and come up with a patch.\n> >\n>\n> OK, although I'm not sure it's all that useful (if we have that in some\n> sort of system view).\n\nIf the server is down, the control file will help. Since we already\nhave the other checkpoint info in the control file, it's much more\nuseful and sensible to have this extra piece of missing information\n(checkpoint kind) there.\n\n> >> We already have some checkpoint info in pg_stat_bgwriter, but that's\n> >> just aggregated data, not very convenient for info about the current\n> >> checkpoint. So maybe we should add pg_stat_progress_checkpoint, showing\n> >> various info about the current checkpoint?\n> >\n> > +1 to have pg_stat_progress_checkpoint view to know what's going on\n> > with the current checkpoint.\n> >\n>\n> Do you plan to add it to this patch, or should it be a separate patch?\n\nNo, I will put some more thoughts around it and start a new thread.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 8 Dec 2021 07:43:19 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "\n\nOn 12/8/21 02:54, Bharath Rupireddy wrote:\n> On Wed, Dec 8, 2021 at 4:07 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>>> I'd bet squashing all of this into a single string (not really a flag)\n>>> will just mean people will have to parse it, etc. Keeping individual\n>>> boolean flags (or even separate string fields) would be better, I guess.\n>>\n>> Since the size of controldata is a concern, I suggest to add an int16 to\n>> populate with flags, which pg_controldata can parse for display.\n> \n> +1. 
I will come up with a patch soon.\n> \n>> Note that this other patch intends to add the timestamp and LSN of most recent\n>> recovery to controldata.\n>> https://commitfest.postgresql.org/36/3183/\n> \n> Thanks. I will go through it separately.\n> \n>>> We already have some checkpoint info in pg_stat_bgwriter, but that's\n>>> just aggregated data, not very convenient for info about the current\n>>> checkpoint. So maybe we should add pg_stat_progress_checkpoint, showing\n>>> various info about the current checkpoint?\n>>\n>> It could go into the pg_stat_checkpointer view, which would be the culmination\n>> of another patch (cc Melanie).\n>> https://commitfest.postgresql.org/36/3272/\n> \n\nI don't think the pg_stat_checkpointer would be a good match - that's\ngoing to be an aggregated view of all past checkpoints, not a good\nsource info about the currently running one.\n\n> +1 to have pg_stat_progress_checkpoint view. We have\n> CheckpointStatsData already. What we need is to make this structure\n> shared and add a little more info to represent the progress, so that\n> the other backends can access it. I think we can discuss this in a\n> separate thread to give it a fresh try rather than proposing this as a\n> part of another thread. I will spend some time on\n> pg_stat_progress_checkpoint proposal and try to come up with a\n> separate thread to discuss.\n> \n\n+1 to discuss it as part of this patch\n\nI'm not sure whether the view should look at CheckpointStatsData, or do\nthe same thing as the other pg_stat_progress_* views - send the data to\nstat collector, and read it from there.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 8 Dec 2021 03:22:10 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" 
}, { "msg_contents": "On Wed, Dec 8, 2021 at 7:43 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Dec 8, 2021 at 7:34 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> > >> I agree it might be useful to provide information about the nature of\n> > >> the checkpoint, and perhaps even PID of the backend that triggered it\n> > >> (although that may be tricky, if the backend terminates).\n> >\n> > >> I'm not sure about adding it to control data, though. That doesn't seem\n> > >> like a very good match for something that's mostly for monitoring.\n> > >\n> > > Having it in the control data file (along with the existing checkpoint\n> > > information) will be helpful to know what was the last checkpoint\n> > > information and we can use the existing pg_control_checkpoint function\n> > > or the tool to emit that info. I plan to add an int16 flag as\n> > > suggested by Justin in this thread and come up with a patch.\n> > >\n> > OK, although I'm not sure it's all that useful (if we have that in some\n> > sort of system view).\n>\n> If the server is down, the control file will help. Since we already\n> have the other checkpoint info in the control file, it's much more\n> useful and sensible to have this extra piece of missing information\n> (checkpoint kind) there.\n\nHere's the patch that adds the last checkpoint kind to the control\nfile, displayed as an output in the pg_controldata tool and also\nexposes it via the pg_control_checkpoint function.\n\nI will analyze and work on the idea of pg_stat_progress_checkpoint and\ntry to post the patch here in this thread.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 9 Dec 2021 12:08:46 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" 
}, { "msg_contents": "On Thu, Dec 9, 2021 at 12:08 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Dec 8, 2021 at 7:43 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Wed, Dec 8, 2021 at 7:34 AM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> > > >> I agree it might be useful to provide information about the nature of\n> > > >> the checkpoint, and perhaps even PID of the backend that triggered it\n> > > >> (although that may be tricky, if the backend terminates).\n> > >\n> > > >> I'm not sure about adding it to control data, though. That doesn't seem\n> > > >> like a very good match for something that's mostly for monitoring.\n> > > >\n> > > > Having it in the control data file (along with the existing checkpoint\n> > > > information) will be helpful to know what was the last checkpoint\n> > > > information and we can use the existing pg_control_checkpoint function\n> > > > or the tool to emit that info. I plan to add an int16 flag as\n> > > > suggested by Justin in this thread and come up with a patch.\n> > > >\n> > > OK, although I'm not sure it's all that useful (if we have that in some\n> > > sort of system view).\n> >\n> > If the server is down, the control file will help. 
Since we already\n> > have the other checkpoint info in the control file, it's much more\n> > useful and sensible to have this extra piece of missing information\n> > (checkpoint kind) there.\n>\n> Here's the patch that adds the last checkpoint kind to the control\n> file, displayed as an output in the pg_controldata tool and also\n> exposes it via the pg_control_checkpoint function.\n\nHere's v2, rebased onto the latest master.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Mon, 27 Dec 2021 19:42:10 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "Hi all,\n\n> Here's v2, rebased onto the latest master.\n\nI've reviewed this patch. The patch builds against the master (commit\ne9d4001ec592bcc9a3332547cb1b0211e8794f38) and passes all the tests.\nThe patch does what it intends to do, namely store the kind of the\nlast checkpoint in the control file and display it in the output of\nthe pg_control_checkpoint() function and pg_controldata utility.\nI did not test it with restartpoints though. Speaking of the torn\nwrites, the size of the control file with this patch applied does not\nexceed 8Kb.\n\nA few code comments:\n\n+ char ckpt_kind[2 * MAXPGPATH];\n\nI don't completely understand why 2 * MAXPGPATH is used here for the\nbuffer size.\n[1] defines MAXPGPATH to be standard size of a pathname buffer. How\ndoes it relate to ckpt_kind ?\n\n+ ControlFile->checkPointKind = 0;\n\nIt is worth a comment that 0 is unknown, as for instance in [2]\n\n+ (flags == 0) ? \"unknown\" : \"\",\n\nThat reads as if this patch would introduce a new \"unknown\" checkpoint state.\nWhy have it here at all if after for example initdb the kind is \"shutdown\" ?\n\n\n+ snprintf(ckpt_kind, 2 * MAXPGPATH, \"%s%s%s%s%s%s%s%s%s\",\n+ (flags == 0) ? \"unknown\" : \"\",\n+ (flags & CHECKPOINT_IS_SHUTDOWN) ? 
\"shutdown \" : \"\",\n+ (flags & CHECKPOINT_END_OF_RECOVERY) ? \"end-of-recovery \" : \"\",\n+ (flags & CHECKPOINT_IMMEDIATE) ? \"immediate \" : \"\",\n+ (flags & CHECKPOINT_FORCE) ? \"force \" : \"\",\n+ (flags & CHECKPOINT_WAIT) ? \"wait \" : \"\",\n+ (flags & CHECKPOINT_CAUSE_XLOG) ? \"wal \" : \"\",\n+ (flags & CHECKPOINT_CAUSE_TIME) ? \"time \" : \"\",\n+ (flags & CHECKPOINT_FLUSH_ALL) ? \"flush-all\" : \"\");\n\nThe space at the strings' end (as in \"wait \" or \"immediate \")\nintroduces extra whitespace in the output of pg_control_checkpoint().\nA similar check at [3] places whitespace differently; that arrangement\nof whitespace should remove the issue.\n\n[1] https://github.com/postgres/postgres/blob/410aa248e5a883fde4832999cc9b23c7ace0f2ff/src/include/pg_config_manual.h#L106\n[2] https://github.com/postgres/postgres/blob/410aa248e5a883fde4832999cc9b23c7ace0f2ff/src/interfaces/libpq/fe-exec.c#L1078\n[3] https://github.com/postgres/postgres/blob/27b77ecf9f4d5be211900eda54d8155ada50d696/src/backend/access/transam/xlog.c#L8851-L8859\n\nRegards,\nSergey Dudoladov\n\n\n", "msg_date": "Thu, 27 Jan 2022 10:53:32 +0100", "msg_from": "Sergey Dudoladov <sergey.dudoladov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "Hi,\n\nOn Thu, Jan 27, 2022 at 10:53:32AM +0100, Sergey Dudoladov wrote:\n> Hi all,\n> \n> > Here's v2, rebased onto the latest master.\n> \n> I've reviewed this patch. The patch builds against the master (commit\n> e9d4001ec592bcc9a3332547cb1b0211e8794f38) and passes all the tests.\n> The patch does what it intends to do, namely store the kind of the\n> last checkpoint in the control file and display it in the output of\n> the pg_control_checkpoint() function and pg_controldata utility.\n\nI don't agree with that.\n\nWhat it's showing is the \"currently ongoing checkpoint or last completed\ncheckpoint\" kind. 
It's still not possible to know if a checkpoint is in\nprogress or not and any kind of information related to it, so I'm not sure how\nuseful this will be compared to a checkpoint progress view.\n\nAlso, it's only showing the initial triggering conditions of checkpoints.\nFor instance, if a timed checkpoint is started and then a backend executes a\n\"CHECKPOINT;\", it will upgrade the ongoing checkpoint with additional flags but\nAFAICS those new flags won't be saved to the control file.\n\n\n", "msg_date": "Thu, 27 Jan 2022 18:56:57 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "On Thu, Jan 27, 2022 at 06:56:57PM +0800, Julien Rouhaud wrote:\n> \n> What it's showing is the \"currently ongoing checkpoint or last completed\n> checkpoint\" kind.\n\nAh after double checking I see it's storing the information *after* the\ncheckpoint completion, so it's indeed the last completed checkpoint. I'm not\nsure how useful it can be, but ok.\n\n> Also, it's only showing the initial triggering conditions of checkpoints.\n> For instance, if a timed checkpoint is started and then a backend executes a\n> \"CHECKPOINT;\", it will upgrade the ongoing checkpoint with additional flags but\n> AFAICS those new flags won't be saved to the control file.\n\nThis one is still valid I think, it's only storing the initial flags and not\nthe possibly upgraded one in shmem.\n\n\n", "msg_date": "Thu, 27 Jan 2022 19:09:29 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" 
}, { "msg_contents": "At Thu, 27 Jan 2022 19:09:29 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> On Thu, Jan 27, 2022 at 06:56:57PM +0800, Julien Rouhaud wrote:\n> > \n> > What it's showing is the \"currently ongoing checkpoint or last completed\n> > checkpoint\" kind.\n> \n> Ah after double checking I see it's storing the information *after* the\n> checkpoint completion, so it's indeed the last completed checkpoint. I'm not\n> sure how useful it can be, but ok.\n\nI don't see it useful (but don't oppose) to record checkpoint kind in\ncontrol file. It is a kind of realtime noncritical info and in the\nfirst place retrievable from server log if needed. And I'm skeptical\nthat it is needed such frequently. Checkpoint kind is useful to check\nmax_wal_size's appropriateness if it is in a summarized form as in\npg_stat_bgwriter. On the other hand showing the same in a stats view\nor the output of pg_control_checkpoint() is fine by me.\n\n> > Also, it's only showing the initial triggering conditions of checkpoints.\n> > For instance, if a timed checkpoint is started and then a backend executes a\n> > \"CHECKPOINT;\", it will upgrade the ongoing checkpoint with additional flags but\n> > AFAICS those new flags won't be saved to the control file.\n> \n> This one is still valid I think, it's only storing the initial flags and not\n> the possibly upgraded one in shmem.\n\nAgreed.\n\nI don't like to add this complex-but-need-in-sync blob twice. If we\nneed to do that twice, I want them consolidated in any shape.\n\n>\tDatum\t\tvalues[18];\n>\tbool\t\tnulls[18];\n\nYou forgot to expand these arrays.\n\nThis breaks checkpoint file format. Need to bump PG_CONTROL_VERSION,\nand pg_upgrade need to treat the change.\n\nEven if we add checkpoint kind to control file, it would look a bit\nstrange that the \"checkpoint kind\" shows first among all\ncheckpoint-related lines. And at least the \"wait\" in the line is\nreally useless since it is not a property of a checkpoint. 
Instead, it\ndoesn't show \"requested\" which is one of the checkpoint properties\nlike \"xlog\" and \"time\". I'm not sure we need all of the properties to\nbe shown but I don't have a clear criteria for each property of it\nought to be shown or not.\n\n> pg_control last modified: Fri 28 Jan 2022 09:49:46 AM JST\n> Latest checkpoint kind: immediate force wait \n> Latest checkpoint location: 0/172B2C8\n\nI'd like to see the PID of the triggering process, but it is really\nnot a information suitable in the control file...\n\n\n- proallargtypes => '{pg_lsn,pg_lsn,text,int4,int4,bool,text,oid,xid,xid,xid,oid,xid,xid,oid,xid,xid,timestamptz}',\n+ proallargtypes => '{pg_lsn,pg_lsn,text,int4,int4,bool,text,oid,xid,xid,xid,oid,xid,xid,oid,xid,xid,timestamptz,text}',\n\nI think the additional column should be text[] instead of text, but\nnot sure.\n\nAnd you need to edit the corresponding part of the doc.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 28 Jan 2022 10:38:53 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "Sorry, the last message lacks one citation.\n\nAt Thu, 27 Jan 2022 19:09:29 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> On Thu, Jan 27, 2022 at 06:56:57PM +0800, Julien Rouhaud wrote:\n> > \n> > What it's showing is the \"currently ongoing checkpoint or last completed\n> > checkpoint\" kind.\n> \n> Ah after double checking I see it's storing the information *after* the\n> checkpoint completion, so it's indeed the last completed checkpoint. I'm not\n> sure how useful it can be, but ok.\n\nI don't see it useful (but don't oppose) to record checkpoint kind in\ncontrol file. It is a kind of realtime noncritical info and in the\nfirst place retrievable from server log if needed. 
And I'm skeptical\nthat it is needed such frequently. Checkpoint kind is useful to check\nmax_wal_size's appropriateness if it is in a summarized form as in\npg_stat_bgwriter. On the other hand showing the same in a stats view\nor the output of pg_control_checkpoint() is fine by me.\n\n> > Also, it's only showing the initial triggering conditions of checkpoints.\n> > For instance, if a timed checkpoint is started and then a backend executes a\n> > \"CHECKPOINT;\", it will upgrade the ongoing checkpoint with additional flags but\n> > AFAICS those new flags won't be saved to the control file.\n> \n> This one is still valid I think, it's only storing the initial flags and not\n> the possibly upgraded one in shmem.\n\nAgreed.\n\n+\tsnprintf(ckpt_kind, 2 * MAXPGPATH, \"%s%s%s%s%s%s%s%s%s\",\n+\t\t\t\t (flags == 0) ? \"unknown\" : \"\",\n+\t\t\t\t (flags & CHECKPOINT_IS_SHUTDOWN) ? \"shutdown \" : \"\",\n+\t\t\t\t (flags & CHECKPOINT_END_OF_RECOVERY) ? \"end-of-recovery \" : \"\",\n+\t\t\t\t (flags & CHECKPOINT_IMMEDIATE) ? \"immediate \" : \"\",\n+\t\t\t\t (flags & CHECKPOINT_FORCE) ? \"force \" : \"\",\n+\t\t\t\t (flags & CHECKPOINT_WAIT) ? \"wait \" : \"\",\n+\t\t\t\t (flags & CHECKPOINT_CAUSE_XLOG) ? \"wal \" : \"\",\n+\t\t\t\t (flags & CHECKPOINT_CAUSE_TIME) ? \"time \" : \"\",\n+\t\t\t\t (flags & CHECKPOINT_FLUSH_ALL) ? \"flush-all\" : \"\");\n\n\nI don't like to add this complex-but-need-in-sync blob twice. If we\nneed to do that twice, I want them consolidated in any shape.\n\n+\tDatum\t\tvalues[18];\n+\tbool\t\tnulls[18];\n\nYou forgot to expand these arrays.\n\nThis breaks checkpoint file format. Need to bump PG_CONTROL_VERSION,\nand pg_upgrade need to treat the change.\n\nEven if we add checkpoint kind to control file, it would look a bit\nstrange that the \"checkpoint kind\" shows first among all\ncheckpoint-related lines. And at least the \"wait\" in the line is\nreally useless since it is not a property of a checkpoint. 
Instead, it\ndoesn't show \"requested\" which is one of the checkpoint properties\nlike \"xlog\" and \"time\". I'm not sure we need all of the properties to\nbe shown but I don't have a clear criteria for each property of it\nought to be shown or not.\n\n> pg_control last modified: Fri 28 Jan 2022 09:49:46 AM JST\n> Latest checkpoint kind: immediate force wait \n> Latest checkpoint location: 0/172B2C8\n\nI'd like to see the PID of the triggering process, but it is really\nnot a information suitable in the control file...\n\n\n- proallargtypes => '{pg_lsn,pg_lsn,text,int4,int4,bool,text,oid,xid,xid,xid,oid,xid,xid,oid,xid,xid,timestamptz}',\n+ proallargtypes => '{pg_lsn,pg_lsn,text,int4,int4,bool,text,oid,xid,xid,xid,oid,xid,xid,oid,xid,xid,timestamptz,text}',\n\nI think the additional column should be text[] instead of text, but\nnot sure.\n\nAnd you need to edit the corresponding part of the doc.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 28 Jan 2022 10:41:28 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "Hi,\n\nOn Fri, Jan 28, 2022 at 10:38:53AM +0900, Kyotaro Horiguchi wrote:\n> \n> I'd like to see the PID of the triggering process, but it is really\n> not a information suitable in the control file...\n\nYes that's something I would like too. But even if the PIDs could be store, I\ndon't think that having the information for an already completed checkpoint\nwould be of any use at all.\n\nFor the current checkpoint, it should also be an array of PID. For instance if\nthe checkpointer started a throttled checkpoint, then someone calls a non\nimmediate pg_start_backup() and finally thinks it's too slow and need a fast\ncheckpoint. 
This would be welcome in a new pg_stat_progress_checkpoint view.\n\n\n", "msg_date": "Fri, 28 Jan 2022 10:00:46 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "At Fri, 28 Jan 2022 10:41:28 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Sorry, the last message lacks one citation.\n> \n> At Thu, 27 Jan 2022 19:09:29 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> > On Thu, Jan 27, 2022 at 06:56:57PM +0800, Julien Rouhaud wrote:\n> > > \n> > > What it's showing is the \"currently ongoing checkpoint or last completed\n> > > checkpoint\" kind.\n> > \n> > Ah after double checking I see it's storing the information *after* the\n> > checkpoint completion, so it's indeed the last completed checkpoint. I'm not\n> > sure how useful it can be, but ok.\n> \n> I don't see it useful (but don't oppose) to record checkpoint kind in\n> control file. It is a kind of realtime noncritical info and in the\n> first place retrievable from server log if needed. And I'm skeptical\n> that it is needed such frequently. Checkpoint kind is useful to check\n> max_wal_size's appropriateness if it is in a summarized form as in\n> pg_stat_bgwriter. 
On the other hand showing the same in a stats view\n> or the output of pg_control_checkpoint() is fine by me.\n> \n> > > Also, it's only showing the initial triggering conditions of checkpoints.\n> > > For instance, if a timed checkpoint is started and then a backend executes a\n> > > \"CHECKPOINT;\", it will upgrade the ongoing checkpoint with additional flags but\n> > > AFAICS those new flags won't be saved to the control file.\n> > \n> > This one is still valid I think, it's only storing the initial flags and not\n> > the possibly upgraded one in shmem.\n> \n> Agreed.\n> \n> +\tsnprintf(ckpt_kind, 2 * MAXPGPATH, \"%s%s%s%s%s%s%s%s%s\",\n> +\t\t\t\t (flags == 0) ? \"unknown\" : \"\",\n> +\t\t\t\t (flags & CHECKPOINT_IS_SHUTDOWN) ? \"shutdown \" : \"\",\n> +\t\t\t\t (flags & CHECKPOINT_END_OF_RECOVERY) ? \"end-of-recovery \" : \"\",\n> +\t\t\t\t (flags & CHECKPOINT_IMMEDIATE) ? \"immediate \" : \"\",\n> +\t\t\t\t (flags & CHECKPOINT_FORCE) ? \"force \" : \"\",\n> +\t\t\t\t (flags & CHECKPOINT_WAIT) ? \"wait \" : \"\",\n> +\t\t\t\t (flags & CHECKPOINT_CAUSE_XLOG) ? \"wal \" : \"\",\n> +\t\t\t\t (flags & CHECKPOINT_CAUSE_TIME) ? \"time \" : \"\",\n> +\t\t\t\t (flags & CHECKPOINT_FLUSH_ALL) ? \"flush-all\" : \"\");\n> \n> \n> I don't like to add this complex-but-need-in-sync blob twice. If we\n> need to do that twice, I want them consolidated in any shape.\n> \n> +\tDatum\t\tvalues[18];\n> +\tbool\t\tnulls[18];\n> \n> You forgot to expand these arrays.\n> \n> This breaks checkpoint file format. Need to bump PG_CONTROL_VERSION,\n> and pg_upgrade need to treat the change.\n> \n> Even if we add checkpoint kind to control file, it would look a bit\n> strange that the \"checkpoint kind\" shows first among all\n> checkpoint-related lines. And at least the \"wait\" in the line is\n> really useless since it is not a property of a checkpoint. Instead, it\n> doesn't show \"requested\" which is one of the checkpoint properties\n> like \"xlog\" and \"time\". 
I'm not sure we need all of the properties to\n> be shown but I don't have a clear criteria for each property of it\n> ought to be shown or not.\n> \n> > pg_control last modified: Fri 28 Jan 2022 09:49:46 AM JST\n> > Latest checkpoint kind: immediate force wait \n> > Latest checkpoint location: 0/172B2C8\n> \n> I'd like to see the PID of the triggering process, but it is really\n> not a information suitable in the control file...\n> \n> \n> - proallargtypes => '{pg_lsn,pg_lsn,text,int4,int4,bool,text,oid,xid,xid,xid,oid,xid,xid,oid,xid,xid,timestamptz}',\n> + proallargtypes => '{pg_lsn,pg_lsn,text,int4,int4,bool,text,oid,xid,xid,xid,oid,xid,xid,oid,xid,xid,timestamptz,text}',\n> \n> I think the additional column should be text[] instead of text, but\n> not sure.\n> \n> And you need to edit the corresponding part of the doc.\n\nI have an additional comment.\n\n+\tchar \t\tckpt_kind[2 * MAXPGPATH];\n..\n+\tsnprintf(ckpt_kind, 2 * MAXPGPATH, \"%s%s%s%s%s%s%s%s%s\",\n+\t\t\t\t (flags == 0) ? \"unknown\" : \"\",\n+\t\t\t\t (flags & CHECKPOINT_IS_SHUTDOWN) ? \"shutdown \" : \"\",\n\n\nThe value \"2 * MAXPGPATH\" is utter nonsense regarding both \"2\" and\n\"MAXPGPATH\", and the product of the two is evidently too large.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 28 Jan 2022 11:10:36 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "At Fri, 28 Jan 2022 10:00:46 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> Hi,\n> \n> On Fri, Jan 28, 2022 at 10:38:53AM +0900, Kyotaro Horiguchi wrote:\n> > \n> > I'd like to see the PID of the triggering process, but it is really\n> > not a information suitable in the control file...\n> \n> Yes that's something I would like too. 
But even if the PIDs could be store, I\n> don't think that having the information for an already completed checkpoint\n> would be of any use at all.\n> \n> For the current checkpoint, it should also be an array of PID. For instance if\n> the checkpointer started a throttled checkpoint, then someone calls a non\n> immediate pg_start_backup() and finally thinks it's too slow and need a fast\n> checkpoint. This would be welcome in a new pg_stat_progress_checkpoint view.\n\nYeah, I thought of the same issue. But at the time of the previous\nmail I thought it would be enough that the PID is of the first\ninitiator of the current running checkpoint likewise the checkpoint\nkind. So it is important to think about the use case. If we put\nsignificancy on actually happend checkpoints, we don't need absorbed\ncheckpoint requests to be recorded but if we put significance on\ncheckpoint requests, we need to care of every checkpoint request.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 28 Jan 2022 11:35:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "On Fri, Jan 28, 2022 at 7:30 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Fri, Jan 28, 2022 at 10:38:53AM +0900, Kyotaro Horiguchi wrote:\n> >\n> > I'd like to see the PID of the triggering process, but it is really\n> > not a information suitable in the control file...\n>\n> Yes that's something I would like too. But even if the PIDs could be store, I\n> don't think that having the information for an already completed checkpoint\n> would be of any use at all.\n>\n> For the current checkpoint, it should also be an array of PID. 
For instance if\n> the checkpointer started a throttled checkpoint, then someone calls a non\n> immediate pg_start_backup() and finally thinks it's too slow and need a fast\n> checkpoint. This would be welcome in a new pg_stat_progress_checkpoint view.\n\nThanks all for the comments. pg_stat_progress_checkpoint is being\ndiscussed in another thread [1].\n\nI will respond to the other comments soon.\n\n[1] https://www.postgresql.org/message-id/CALj2ACV-F%2BK%2Bz%2BXW8fnK4MV71qz2gzAMxFnYziRgZURMB5ycAQ%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 28 Jan 2022 08:17:39 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "Hi,\n\nOn Fri, Jan 28, 2022 at 08:17:39AM +0530, Bharath Rupireddy wrote:\n> \n> Thanks all for the comments. pg_stat_progress_checkpoint is being\n> discussed in another thread [1].\n> \n> [1] https://www.postgresql.org/message-id/CALj2ACV-F%2BK%2Bz%2BXW8fnK4MV71qz2gzAMxFnYziRgZURMB5ycAQ%40mail.gmail.com\n\nThis thread was explicitly talking about log reporting and it's not clear to me\nthat it's now supposed to be about a new pg_stat_progress_checkpoint()\nfunction. I think you should start a new thread or at least change the subject\nto be clearer.\n\n\n", "msg_date": "Fri, 28 Jan 2022 11:24:13 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "On Fri, Jan 28, 2022 at 8:54 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Fri, Jan 28, 2022 at 08:17:39AM +0530, Bharath Rupireddy wrote:\n> >\n> > Thanks all for the comments. 
pg_stat_progress_checkpoint is being\n> > discussed in another thread [1].\n> >\n> > [1] https://www.postgresql.org/message-id/CALj2ACV-F%2BK%2Bz%2BXW8fnK4MV71qz2gzAMxFnYziRgZURMB5ycAQ%40mail.gmail.com\n>\n> This thread was explicitly talking about log reporting and it's not clear to me\n> that it's now supposed to be about a new pg_stat_progress_checkpoint()\n> function. I think you should start a new thread or at least change the subject\n> to be clearer.\n\n+1 to change the subject of that thread to 'report checkpoint progress\nwith pg_stat_progress_checkpoint'.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 28 Jan 2022 08:58:17 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "On Thu, Jan 27, 2022 at 3:23 PM Sergey Dudoladov\n<sergey.dudoladov@gmail.com> wrote:\n> > Here's v2, rebased onto the latest master.\n>\n> I've reviewed this patch. The patch builds against the master (commit\n> e9d4001ec592bcc9a3332547cb1b0211e8794f38) and passes all the tests.\n> The patch does what it intends to do, namely store the kind of the\n> last checkpoint in the control file and display it in the output of\n> the pg_control_checkpoint() function and pg_controldata utility.\n> I did not test it with restartpoints though. Speaking of the torn\n> writes, the size of the control file with this patch applied does not\n> exceed 8Kb.\n\nThanks for the review.\n\n> A few code comments:\n>\n> + char ckpt_kind[2 * MAXPGPATH];\n>\n> I don't completely understand why 2 * MAXPGPATH is used here for the\n> buffer size.\n> [1] defines MAXPGPATH to be standard size of a pathname buffer. How\n> does it relate to ckpt_kind ?\n\nI was using it loosely. 
Changed in the v3 patch.\n\n> + ControlFile->checkPointKind = 0;\n>\n> It is worth a comment that 0 is unknown, as for instance in [2]\n\nWe don't even need ControlFile->checkPointKind = 0; because\nInitControlFile will memset(ControlFile, 0, sizeof(ControlFileData));,\nhence removed this.\n\n> + (flags == 0) ? \"unknown\" : \"\",\n>\n> That reads as if this patch would introduce a new \"unknown\" checkpoint state.\n> Why have it here at all if after for example initdb the kind is \"shutdown\" ?\n\nYeah, even LogCheckpointStart doesn't have anything \"unknown\" so removed it.\n\n> The space at the strings' end (as in \"wait \" or \"immediate \")\n> introduces extra whitespace in the output of pg_control_checkpoint().\n> A similar check at [3] places whitespace differently; that arrangement\n> of whitespace should remove the issue.\n\nChanged.\n\n> > Datum values[18];\n> > bool nulls[18];\n>\n> You forgot to expand these arrays.\n\nNot sure what you meant here. The size of the array is already 19 in v2.\n\n> This breaks checkpoint file format. Need to bump PG_CONTROL_VERSION,\n> and pg_upgrade need to treat the change.\n\nI added a note in the commit message to bump cat version so that the\ncommitter will take care of it.\n\n> - proallargtypes => '{pg_lsn,pg_lsn,text,int4,int4,bool,text,oid,xid,xid,xid,oid,xid,xid,oid,xid,xid,timestamptz}',\n> + proallargtypes => '{pg_lsn,pg_lsn,text,int4,int4,bool,text,oid,xid,xid,xid,oid,xid,xid,oid,xid,xid,timestamptz,text}',\n>\n> I think the additional column should be text[] instead of text, but\n> not sure.\n\nWe are preparing a single string of all the checkpoint kinds and\noutputting as a text column, so we don't need text[].\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 28 Jan 2022 13:49:19 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" 
}, { "msg_contents": "Hi,\n\nOn Fri, Jan 28, 2022 at 01:49:19PM +0530, Bharath Rupireddy wrote:\n> On Thu, Jan 27, 2022 at 3:23 PM Sergey Dudoladov\n> <sergey.dudoladov@gmail.com> wrote:\n> > > Here's v2, rebased onto the latest master.\n> >\n> > I've reviewed this patch. The patch builds against the master (commit\n> > e9d4001ec592bcc9a3332547cb1b0211e8794f38) and passes all the tests.\n> > The patch does what it intends to do, namely store the kind of the\n> > last checkpoint in the control file and display it in the output of\n> > the pg_control_checkpoint() function and pg_controldata utility.\n> > I did not test it with restartpoints though. Speaking of the torn\n> > writes, the size of the control file with this patch applied does not\n> > exceed 8Kb.\n> \n> Thanks for the review.\n> \n> > A few code comments:\n> >\n> > + char ckpt_kind[2 * MAXPGPATH];\n> >\n> > I don't completely understand why 2 * MAXPGPATH is used here for the\n> > buffer size.\n> > [1] defines MAXPGPATH to be standard size of a pathname buffer. How\n> > does it relate to ckpt_kind ?\n> \n> I was using it loosely. Changed in the v3 patch.\n> \n> > + ControlFile->checkPointKind = 0;\n> >\n> > It is worth a comment that 0 is unknown, as for instance in [2]\n> \n> We don't even need ControlFile->checkPointKind = 0; because\n> InitControlFile will memset(ControlFile, 0, sizeof(ControlFileData));,\n> hence removed this.\n> \n> > + (flags == 0) ? 
\"unknown\" : \"\",\n> >\n> > That reads as if this patch would introduce a new \"unknown\" checkpoint state.\n> > Why have it here at all if after for example initdb the kind is \"shutdown\" ?\n> \n> Yeah, even LogCheckpointStart doesn't have anything \"unknown\" so removed it.\n> \n> > The space at the strings' end (as in \"wait \" or \"immediate \")\n> > introduces extra whitespace in the output of pg_control_checkpoint().\n> > A similar check at [3] places whitespace differently; that arrangement\n> > of whitespace should remove the issue.\n> \n> Changed.\n> \n> > > Datum values[18];\n> > > bool nulls[18];\n> >\n> > You forgot to expand these arrays.\n> \n> Not sure what you meant here. The size of the array is already 19 in v2.\n> \n> > This breaks checkpoint file format. Need to bump PG_CONTROL_VERSION,\n> > and pg_upgrade need to treat the change.\n> \n> I added a note in the commit message to bump cat version so that the\n> committer will take care of it.\n\nPG_CONTROL_VERSION is different from catversion. You should update it in this\npatch.\n\nBut Horiguchi-san was also mentioning that pg_upgrade/controldata.c needs some\nmodifications if you change the format (thus the requirement to bump\nPG_CONTROL_VERSION).\n\nWhy are you defining CHECKPOINT_KIND_TEXT_LENGTH twice? You\nshould just define it in some sensible header used by both files, or better\nhave a new function to take care of that rather than having the code\nduplicated.\n\nAlso, you still didn't fix the possible flag upgrade issue.\n\n\n", "msg_date": "Fri, 28 Jan 2022 16:50:11 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "On Fri, Jan 28, 2022 at 2:20 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> PG_CONTROL_VERSION is different from catversion. You should update it in this\n> patch.\n\nMy bad. 
Updated it.\n\n> But Horiguchi-san was also mentioning that pg_upgrade/controldata.c needs some\n> modifications if you change the format (thus the requirement to bump\n> PG_CONTROL_VERSION).\n\n> Also, you still didn't fix the possible flag upgrade issue.\n\nI don't think we need to change pg_upgrade's ControlData controldata;\nstructure as the information may not be needed there and the while\nloop there specifically parses/searches for the required\npg_controldata output texts. Am I missing something here?\n\n> Why are you defining CHECKPOINT_KIND_TEXT_LENGTH twice? You\n> should just define it in some sensible header used by both files, or better\n> have a new function to take care of that rather than having the code\n> duplicated.\n\nYeah, added the macro in pg_control.h. I also wanted to have a common\nfunction to get checkpoint kind text and place it in\ncontroldata_utils.c, but it doesn't have xlog.h included, so no\ncheckpoint flags there, hence I refrained from the common function\nidea.\n\nI think we don't need to print the checkpoint kind in pg_resetwal.c's\nPrintControlValues because the pg_resetwal changes the checkpoint and\nPrintControlValues just prints the fields that will not be\nreset/changed by pg_resetwal. Am I missing something here?\n\nAttaching v4.\n\nNot related to this patch: by looking at the way the fields (like\n\"Latest checkpoint's TimeLineID:\", \"Latest checkpoint's NextOID:\"\netc.) 
of pg_controldata output are being used in pg_resetwal.c,\npg_controldata.c, and pg_upgrade/controldata.c, I'm thinking of having\nthose fields as macros in pg_control.h\n#define PG_CONTROL_LATEST_CHECKPOINT_TLI \"Latest checkpoint's TimeLineID:\"\n#define PG_CONTROL_LATEST_CHECKPOINT_NEXTOID \"Latest checkpoint's NextOID:\"\nand so on for all the pg_controldata fields would be a good idea for\nbetter code manageability and not to miss any field text changes.\n\nIf okay, I will discuss this in a separate thread.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 28 Jan 2022 20:21:52 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "Hi,\n\nOn Fri, Jan 28, 2022 at 08:21:52PM +0530, Bharath Rupireddy wrote:\n> \n> I don't think we need to change pg_upgrade's ControlData controldata;\n> structure as the information may not be needed there and the while\n> loop there specifically parses/searches for the required\n> pg_controldata output texts. Am I missing something here?\n\nRight, I was remembering that there was a check that all expected fields were\nfound but after double checking I was clearly wrong, sorry about that.\n> \n> > Also, you still didn't fix the possible flag upgrade issue.\n\nUnless I'm missing something that's an issue that you still haven't addressed\nor explained why it's not a problem?\n\n> \n> > Why are you defining CHECKPOINT_KIND_TEXT_LENGTH twice? You\n> > should just define it in some sensible header used by both files, or better\n> > have a new function to take care of that rather than having the code\n> > duplicated.\n> \n> Yeah, added the macro in pg_control.h. 
I also wanted to have a common\n> function to get checkpoint kind text and place it in\n> controldata_utils.c, but it doesn't have xlog.h included, so no\n> checkpoint flags there, hence I refrained from the common function\n> idea.\n\nThat's a bit annoying, I'm not sure what's best to do here.\n\n\n", "msg_date": "Sat, 29 Jan 2022 00:10:19 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "Hi,\n\nOn 2021-12-08 03:04:23 +0100, Tomas Vondra wrote:\n> On 12/8/21 02:54, Bharath Rupireddy wrote:\n> >> I'm not sure about adding it to control data, though. That doesn't seem\n> >> like a very good match for something that's mostly for monitoring.\n> > \n> > Having it in the control data file (along with the existing checkpoint\n> > information) will be helpful to know what was the last checkpoint\n> > information and we can use the existing pg_control_checkpoint function\n> > or the tool to emit that info. I plan to add an int16 flag as\n> > suggested by Justin in this thread and come up with a patch.\n> > \n> \n> OK, although I'm not sure it's all that useful (if we have that in some\n> sort of system view).\n\nI don't think we should add stuff like this to the control file. We want to\nkeep ControlFileData within a disk sector size / 512 bytes (so that we don't\nneed to care about torn writes etc). Adding information that we don't really\nneed will byte us at some point, because at that point there will be reliance\non the added data.\n\nNor have I read a convincing reason for needing the data to be readily\naccessible when the server is shut down?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 28 Jan 2022 15:37:21 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" 
}, { "msg_contents": "Hi,\n\nOn 2021-12-07 20:06:22 +0530, Bharath Rupireddy wrote:\n> One concern is that we don't want to increase the size of pg_controldata by\n> more than the typical block size (of 8K) to avoid any torn-writes.\n\nThe limit is 512 bytes (a disk sector), not 8K. There are plenty devices with\n4K sectors as well, but I'm not aware of any with 8K.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 28 Jan 2022 15:39:28 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "At Fri, 28 Jan 2022 13:49:19 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> > - proallargtypes => '{pg_lsn,pg_lsn,text,int4,int4,bool,text,oid,xid,xid,xid,oid,xid,xid,oid,xid,xid,timestamptz}',\n> > + proallargtypes => '{pg_lsn,pg_lsn,text,int4,int4,bool,text,oid,xid,xid,xid,oid,xid,xid,oid,xid,xid,timestamptz,text}',\n> >\n> > I think the additional column should be text[] instead of text, but\n> > not sure.\n> \n> We are preparing a single string of all the checkpoint kinds and\n> outputting as a text column, so we don't need text[].\n\nWhat I meant to suggest is to change it to an array of text instead of\na text that consists of multiple labels concatenated by spaces.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 31 Jan 2022 10:50:45 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" 
}, { "msg_contents": "At Sat, 29 Jan 2022 00:10:19 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> On Fri, Jan 28, 2022 at 08:21:52PM +0530, Bharath Rupireddy wrote:\n> > > Also, you still didn't fix the possible flag upgrade issue.\n> \n> Unless I'm missing something that's an issue that you still haven't addressed\n> or explained why it's not a problem?\n\nAs Bharath said, pg_upgrade reads the part of the old cluster's controldata\nthat is needed to check compatibility, so I agree that we don't have the\nflag upgrade issue I mentioned.\n\n> > > Why are you defining CHECKPOINT_KIND_TEXT_LENGTH twice? You\n> > > should just define it in some sensible header used by both files, or better\n> > > have a new function to take care of that rather than having the code\n> > > duplicated.\n> > \n> > Yeah, added the macro in pg_control.h. I also wanted to have a common\n> > function to get checkpoint kind text and place it in\n> > controldata_utils.c, but it doesn't have xlog.h included, so no\n> > checkpoint flags there, hence I refrained from the common function\n> > idea.\n> \n> That's a bit annoying, I'm not sure what's best to do here.\n\nI misunderstood that the size is restricted to 8k, but it is really 512\nbytes, as Andres pointed out. So we should not add it, at least as text. This\nmeans pg_controldata needs to translate the flags into human-readable\ntext, but, to be clear, I still don't think it's useful in the control\ndata.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 31 Jan 2022 11:10:45 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" 
}, { "msg_contents": "Hi,\n\nOn Mon, Jan 31, 2022 at 11:10:45AM +0900, Kyotaro Horiguchi wrote:\n> \n> This means pg_controldata need to translate the flags into human-readable\n> text but, to be clear, I still don't think its usefull in the control\n> data.\n\nI've been saying that since my first email, I also don't see any scenario where\nhaving this specific information can be of any help.\n\nGiven Andres feedback, unless someone can provide some realistic use case I\nthink we should mark this patch as rejected and focus on a\npg_progress_checkpoint feature. I will do so in a couple of days when I will\nclose this commitfest.\n\n\n", "msg_date": "Mon, 31 Jan 2022 11:40:15 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "On Mon, Jan 31, 2022 at 9:10 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Mon, Jan 31, 2022 at 11:10:45AM +0900, Kyotaro Horiguchi wrote:\n> >\n> > This means pg_controldata need to translate the flags into human-readable\n> > text but, to be clear, I still don't think its usefull in the control\n> > data.\n>\n> I've been saying that since my first email, I also don't see any scenario where\n> having this specific information can be of any help.\n>\n> Given Andres feedback, unless someone can provide some realistic use case I\n> think we should mark this patch as rejected and focus on a\n> pg_progress_checkpoint feature. 
I will do so in a couple of days when I will\n> close this commitfest.\n\nThe size of ControlFileData is 296 bytes currently and the sector\nlimit is of 512 bytes (PG_CONTROL_MAX_SAFE_SIZE), if we feel that this\nextra 2 bytes of checkpoint flags isn't worth storing in the control\nfile, I'm pretty much okay with it.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 31 Jan 2022 10:58:31 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "Hi,\n\nOn Mon, Jan 31, 2022 at 10:58:31AM +0530, Bharath Rupireddy wrote:\n> \n> The size of ControlFileData is 296 bytes currently and the sector\n> limit is of 512 bytes (PG_CONTROL_MAX_SAFE_SIZE), if we feel that this\n> extra 2 bytes of checkpoint flags isn't worth storing in the control\n> file, I'm pretty much okay with it.\n\nFor now we have some room, but we will likely keep consuming bytes in that file\nfor more critical features and it's almost certain that at one point we will\nregret wasting 2 bytes for that.\n\n\n", "msg_date": "Mon, 31 Jan 2022 13:54:16 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "On Mon, Jan 31, 2022 at 01:54:16PM +0800, Julien Rouhaud wrote:\n> For now we have some room, but we will likely keep consuming bytes in that file\n> for more critical features and it's almost certain that at one point we will\n> regret wasting 2 bytes for that.\n\nAgreed to drop the patch. 
That's only two bytes, but we may regret\nthat in the future and this is not critical for the system.\n--\nMichael", "msg_date": "Mon, 31 Jan 2022 16:17:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" }, { "msg_contents": "On Mon, Jan 31, 2022 at 12:47 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Jan 31, 2022 at 01:54:16PM +0800, Julien Rouhaud wrote:\n> > For now we have some room, but we will likely keep consuming bytes in that file\n> > for more critical features and it's almost certain that at one point we will\n> > regret wasting 2 bytes for that.\n>\n> Agreed to drop the patch. That's only two bytes, but we may regret\n> that in the future and this is not critical for the system.\n\nI agree. Thank you all for your review.\n\n\n", "msg_date": "Mon, 31 Jan 2022 13:11:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is there a way (except from server logs) to know the kind of\n on-going/last checkpoint?" } ]
[ { "msg_contents": "I've been becoming more and more interested in learning formal methods\nand wanted to find a good project to which I could contribute. Would\nthe development team appreciate someone adding ACSL annotations to the\ncodebase? Are such pull requests likely to be upstreamed? I ask this\nbecause it uses comment syntax to express the specifications and some\npeople dislike that. However, as we all know, there are solid\nadvantages to using formal methods, such as automatic test generation\nand proven absence of runtime errors.\n\nLooking forward to hearing from you!\nColin\n\n\n", "msg_date": "Tue, 7 Dec 2021 18:32:42 +0000", "msg_from": "Colin Gilbert <colingilbert86@gmail.com>", "msg_from_op": true, "msg_subject": "Appetite for Frama-C annotations?" }, { "msg_contents": "Colin Gilbert <colingilbert86@gmail.com> writes:\n> I've been becoming more and more interested in learning formal methods\n> and wanted to find a good project to which I could contribute. Would\n> the development team appreciate someone adding ACSL annotations to the\n> codebase?\n\nMost likely not. It might be interesting to see if it's possible to\ndo anything at all with formal methods in the hairy mess of the Postgres\ncode base ... but I don't think we'd clutter the code with such comments\nunless we thought it'd help the average PG contributor. Which I doubt.\n\n> Are such pull requests likely to be upstreamed?\n\nWe don't do PRs around here --- the project long predates the existence\nof git, nevermind github-based workflows, so we're set in other habits.\nSee\n\nhttps://wiki.postgresql.org/wiki/Developer_FAQ\nhttps://wiki.postgresql.org/wiki/Submitting_a_Patch\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Dec 2021 10:21:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Appetite for Frama-C annotations?" 
}, { "msg_contents": "On 12/07/21 13:32, Colin Gilbert wrote:\n> I've been becoming more and more interested in learning formal methods\n> and wanted to find a good project to which I could contribute. Would\n> the development team appreciate someone adding ACSL annotations to the\n> codebase?\n\nMy ears perked up ... I have some Frama-C-related notes-to-self from\na couple years ago that I've not yet pursued, with an interest in how\nuseful they could be in the hairy mess of the PL/Java extension.\n\nRight at the moment, I am more invested in a somewhat massive\nrefactoring of the extension. In one sense, tackling the refactoring\nwithout formal tools feels like the wrong order (or working without a net).\nIt's just that there are only so many hours in the day, and the\nrefactoring really can't wait, given the backlog of modern capabilities\n(like polyglot programming) held back by the current structure. And the\ncode base should be less hairy afterward, and maybe more amenable to\nspec annotations.\n\nAccording to OpenHub, PL/Java's implementation is currently 74% Java,\n19% C. A goal of the current refactoring is to skew that ratio more\nheavily Java, with as little C glue remaining as can be achieved.\nMeaning, ultimately, a C-specific framework like Frama-C isn't where\nmost of the fun would be in PL/Java. Still, I'd be interested to see it\nin action.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 8 Dec 2021 11:02:45 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Appetite for Frama-C annotations?" }, { "msg_contents": "Hi! Thanks for the quick reply. Are you doing any of this work in a\npublic repository? If so, could we have a link? There is a similar\nidea in Java Modelling Language. It also uses its own annotations to\ndescribe additional requirements. Are you considering to use it? 
Maybe\nI could help...\n\nOn Wed, 8 Dec 2021 at 16:02, Chapman Flack <chap@anastigmatix.net> wrote:\n>\n> On 12/07/21 13:32, Colin Gilbert wrote:\n> > I've been becoming more and more interested in learning formal methods\n> > and wanted to find a good project to which I could contribute. Would\n> > the development team appreciate someone adding ACSL annotations to the\n> > codebase?\n>\n> My ears perked up ... I have some Frama-C-related notes-to-self from\n> a couple years ago that I've not yet pursued, with an interest in how\n> useful they could be in the hairy mess of the PL/Java extension.\n>\n> Right at the moment, I am more invested in a somewhat massive\n> refactoring of the extension. In one sense, tackling the refactoring\n> without formal tools feels like the wrong order (or working without a net).\n> It's just that there are only so many hours in the day, and the\n> refactoring really can't wait, given the backlog of modern capabilities\n> (like polyglot programming) held back by the current structure. And the\n> code base should be less hairy afterward, and maybe more amenable to\n> spec annotations.\n>\n> According to OpenHub, PL/Java's implementation is currently 74% Java,\n> 19% C. A goal of the current refactoring is to skew that ratio more\n> heavily Java, with as little C glue remaining as can be achieved.\n> Meaning, ultimately, a C-specific framework like Frama-C isn't where\n> most of the fun would be in PL/Java. Still, I'd be interested to see it\n> in action.\n>\n> Regards,\n> -Chap\n\n\n", "msg_date": "Wed, 8 Dec 2021 17:13:27 +0000", "msg_from": "Colin Gilbert <colingilbert86@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Appetite for Frama-C annotations?" }, { "msg_contents": "Thank you very much Tom for your quick reply! 
If nobody objects to it\ntoo much, I'd focus my work on ensuring full-text-search is\nmemory-safe.\n\nOn Wed, 8 Dec 2021 at 15:21, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Colin Gilbert <colingilbert86@gmail.com> writes:\n> > I've been becoming more and more interested in learning formal methods\n> > and wanted to find a good project to which I could contribute. Would\n> > the development team appreciate someone adding ACSL annotations to the\n> > codebase?\n>\n> Most likely not. It might be interesting to see if it's possible to\n> do anything at all with formal methods in the hairy mess of the Postgres\n> code base ... but I don't think we'd clutter the code with such comments\n> unless we thought it'd help the average PG contributor. Which I doubt.\n>\n> > Are such pull requests likely to be upstreamed?\n>\n> We don't do PRs around here --- the project long predates the existence\n> of git, nevermind github-based workflows, so we're set in other habits.\n> See\n>\n> https://wiki.postgresql.org/wiki/Developer_FAQ\n> https://wiki.postgresql.org/wiki/Submitting_a_Patch\n>\n> regards, tom lane\n\n\n", "msg_date": "Wed, 8 Dec 2021 17:16:43 +0000", "msg_from": "Colin Gilbert <colingilbert86@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Appetite for Frama-C annotations?" }, { "msg_contents": "On 12/08/21 12:13, Colin Gilbert wrote:\n> Hi! Thanks for the quick reply. Are you doing any of this work in a\n> public repository? If so, could we have a link? There is a similar\n> idea in Java Modelling Language. It also uses its own annotations to\n> describe additional requirements. Are you considering to use it? Maybe\n> I could help...\n\nPL/Java's public repository is https://github.com/tada/pljava\n\nMuch of the current refactoring I spoke of is not pushed there yet\n(it is still in the getting-rebased-a-lot stages). 
There may be\nan initial push of it appearing there in the coming weeks though.\n\nJML is also mentioned in my notes from a couple years back when I was\nbrowsing such tools.\n\nThere is also a PL/Java-specific mailing list at\nhttps://www.postgresql.org/list/pljava-dev/\n\nThe traffic there is not high.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 8 Dec 2021 13:04:25 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Appetite for Frama-C annotations?" }, { "msg_contents": "Hi there,\n\nI played a lot with frama-c a long time ago.\nFrama-c annotations are very verbose and the result is highly dependent on\nthe skills of the annotator.\n\nKeeping it up-to-date on such a large project with high-speed development\nwill be, in my very humble opinion, extremely difficult.\nPerhaps on a sub-project like libpq ?\n\n\n-- \nLaurent \"ker2x\" Laborde\n\nOn Wed, Dec 8, 2021 at 3:45 PM Colin Gilbert <colingilbert86@gmail.com>\nwrote:\n\n> I've been becoming more and more interested in learning formal methods\n> and wanted to find a good project to which I could contribute. Would\n> the development team appreciate someone adding ACSL annotations to the\n> codebase? Are such pull requests likely to be upstreamed? I ask this\n> because it uses comment syntax to express the specifications and some\n> people dislike that. 
However, as we all know, there are solid\n> advantages to using formal methods, such as automatic test generation\n> and proven absence of runtime errors.\n>\n> Looking forward to hearing from you!\n> Colin\n>\n>\n>\n", "msg_date": "Thu, 9 Dec 2021 15:00:24 +0100", "msg_from": "Laurent Laborde <kerdezixe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Appetite for Frama-C annotations?" 
I'm\nactually really glad for Laurent's suggestion of checking out libpq; I\nassume it sees the least rework? That might actually be a fine first\ntarget for some bug-hunting.\n\nAs an aside, my desire to validate the FTS is due to its relatively\nexposed nature. This is worrying compared to attacks requiring the\nability to craft special tables, prepared statements, or other things\nonly a developer or admin could do. Would a memory-safe text search\nusing the existing data structure be best implemented as a plugin?\nI've seen adverts for FPGA IP that can access the Postgres data\nstructures directly from memory so that gives me confidence. Chapman\nmentioned polyglot programming; would Postgres ever consider\ndeprecating unsafe features and replacing them with plugins written in\nsomething like Rust or Ada/SPARK? I write this, hoping not to tread on\na landmine.\n\nI appreciate everyone's engagement!\n\n\nColin\n\nOn Thu, 9 Dec 2021 at 14:00, Laurent Laborde <kerdezixe@gmail.com> wrote:\n>\n> Hi there,\n>\n> I played a lot with frama-c a long time ago.\n> Frama-c annotations are very verbose and the result is highly dependent on the skills of the annotator.\n>\n> Keeping it up-to-date on such a large project with high-speed development will be, in my very humble opinion, extremely difficult.\n> Perhaps on a sub-project like libpq ?\n>\n>\n> --\n> Laurent \"ker2x\" Laborde\n>\n> On Wed, Dec 8, 2021 at 3:45 PM Colin Gilbert <colingilbert86@gmail.com> wrote:\n>>\n>> I've been becoming more and more interested in learning formal methods\n>> and wanted to find a good project to which I could contribute. Would\n>> the development team appreciate someone adding ACSL annotations to the\n>> codebase? Are such pull requests likely to be upstreamed? I ask this\n>> because it uses comment syntax to express the specifications and some\n>> people dislike that. 
However, as we all know, there are solid\n>> advantages to using formal methods, such as automatic test generation\n>> and proven absence of runtime errors.\n>>\n>> Looking forward to hearing from you!\n>> Colin\n>>\n>>\n>\n>\n\n\n", "msg_date": "Thu, 9 Dec 2021 17:53:09 +0000", "msg_from": "Colin Gilbert <colingilbert86@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Appetite for Frama-C annotations?" }, { "msg_contents": "On 12/09/21 12:53, Colin Gilbert wrote:\n> ... plugins written in\n> something like Rust or Ada/SPARK? I write this, hoping not to tread on\n\nSome work toward supporting \"deep\" PostgreSQL extensions in Rust\nwas presented at PGCon 2019 [0].\n\nRegards,\n-Chap\n\n\n[0] https://www.pgcon.org/2019/schedule/events/1322.en.html\n\n\n", "msg_date": "Thu, 9 Dec 2021 13:04:13 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Appetite for Frama-C annotations?" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n\n> On 12/09/21 12:53, Colin Gilbert wrote:\n>> ... plugins written in\n>> something like Rust or Ada/SPARK? I write this, hoping not to tread on\n>\n> Some work toward supporting \"deep\" PostgreSQL extensions in Rust\n> was presented at PGCon 2019 [0].\n\nThere's also https://github.com/zombodb/pgx/, which seems more complete\nfrom a quick glance.\n\nhttps://depth-first.com/articles/2021/08/25/postgres-extensions-in-rust/\n\n\n- ilmari\n\n\n", "msg_date": "Thu, 09 Dec 2021 18:15:44 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Appetite for Frama-C annotations?" }, { "msg_contents": "Thank you all very much for your inputs! To summarise: I asked about\nthe possibility of adding ACSL annotations to the codebase and the\nresponses ranged from nonplussed on one end of the spectrum to some\ndegree of enthusiasm on the other. 
It was suggested that libpsql would\nbe a better initial target than the internals. I appreciated the good,\nbrisk discussion and if anyone else has any other ideas please let us\nall know.\n\nRegards,\nColin\n\n\n", "msg_date": "Fri, 10 Dec 2021 23:03:16 +0000", "msg_from": "Colin Gilbert <colingilbert86@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Appetite for Frama-C annotations?" } ]
[ { "msg_contents": "Now that subtests in TAP are supported again, I want to correct the \ngreat historical injustice of 7912f9b7dc9e2d3f6cd81892ef6aa797578e9f06 \nand put those subtests back.\n\nMuch more work like this is possible, of course. I just wanted to get \nthis out of the way since the code was already written and I've had this \non my list for, uh, 7 years.", "msg_date": "Wed, 8 Dec 2021 14:26:35 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Readd use of TAP subtests" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n\n> Now that subtests in TAP are supported again, I want to correct the\n> great historical injustice of 7912f9b7dc9e2d3f6cd81892ef6aa797578e9f06 \n> and put those subtests back.\n\nThe updated Test::More version requirement also gives us done_testing()\n(added in 0.88), which saves us from the constant maintenance headache\nof updating the test counts every time. Do you fancy switching the\ntests you're modifying anyway to that?\n\n\n- ilmari\n\n\n", "msg_date": "Wed, 08 Dec 2021 13:49:06 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Readd use of TAP subtests" }, { "msg_contents": "> On 8 Dec 2021, at 14:49, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n> \n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> \n>> Now that subtests in TAP are supported again, I want to correct the\n>> great historical injustice of 7912f9b7dc9e2d3f6cd81892ef6aa797578e9f06 \n>> and put those subtests back.\n> \n> The updated Test::More version requirement also gives us done_testing()\n> (added in 0.88), which saves us from the constant maintenance headache\n> of updating the test counts every time. 
Do you fancy switching the\n> tests you're modifying anyway to that?\n\nWe already call done_testing() in a number of tests, and have done so for a\nnumber of years. src/test/ssl/t/002_scram.pl is one example.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 8 Dec 2021 14:53:33 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Readd use of TAP subtests" }, { "msg_contents": "\nOn 12/8/21 08:26, Peter Eisentraut wrote:\n>\n> Now that subtests in TAP are supported again, I want to correct the\n> great historical injustice of 7912f9b7dc9e2d3f6cd81892ef6aa797578e9f06\n> and put those subtests back.\n>\n> Much more work like this is possible, of course.  I just wanted to get\n> this out of the way since the code was already written and I've had\n> this on my list for, uh, 7 years.\n\n\n+many as long as we cover all the cases in Cluster.pm and Utils.pm. I\nsuspect they have acquired some new multi-test subs in the intervening 7\nyears :-)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 8 Dec 2021 08:53:59 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Readd use of TAP subtests" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n\n>> On 8 Dec 2021, at 14:49, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n>> \n>> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> \n>>> Now that subtests in TAP are supported again, I want to correct the\n>>> great historical injustice of 7912f9b7dc9e2d3f6cd81892ef6aa797578e9f06 \n>>> and put those subtests back.\n>> \n>> The updated Test::More version requirement also gives us done_testing()\n>> (added in 0.88), which saves us from the constant maintenance headache\n>> of updating the test counts every time. 
Do you fancy switching the\n>> tests you're modifying anyway to that?\n>\n> We already call done_testing() in a number of tests, and have done so for a\n> number of years. src/test/ssl/t/002_scram.pl is one example.\n\nReading the Test::More changelog more closely, it turns out that even\nthough we used to depend on version 0.87, that's effectively equivalent\n0.88, because there was no stable 0.87 release, only 0.86 and\ndevelopment releases 0.87_01 through _03.\n\nEither way, I think we should be switching tests to done_testing()\nwhenever it would otherwise have to adjust the test count, to avoid\nhaving to do that again and again and again going forward.\n\n- ilmari\n\n\n", "msg_date": "Wed, 08 Dec 2021 14:08:05 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Readd use of TAP subtests" }, { "msg_contents": "\nOn 12/8/21 09:08, Dagfinn Ilmari Mannsåker wrote:\n>\n> Either way, I think we should be switching tests to done_testing()\n> whenever it would otherwise have to adjust the test count, to avoid\n> having to do that again and again and again going forward.\n>\n\nI'm not so sure. I don't think its necessarily a bad idea to have to\ndeclare how many tests you're going to run. I appreciate it gets hard in\nsome cases, which is why we have now insisted on a Test::More version\nthat supports subtests. I suppose we could just take the attitude that\nwe're happy with however many tests it actually runs, and as long as\nthey all pass we're good. 
It just seems slightly sloppy.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 8 Dec 2021 09:33:06 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Readd use of TAP subtests" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n\n> On 12/8/21 09:08, Dagfinn Ilmari Mannsåker wrote:\n>>\n>> Either way, I think we should be switching tests to done_testing()\n>> whenever it would otherwise have to adjust the test count, to avoid\n>> having to do that again and again and again going forward.\n>>\n>\n> I'm not so sure. I don't think its necessarily a bad idea to have to\n> declare how many tests you're going to run. I appreciate it gets hard in\n> some cases, which is why we have now insisted on a Test::More version\n> that supports subtests. I suppose we could just take the attitude that\n> we're happy with however many tests it actually runs, and as long as\n> they all pass we're good. It just seems slightly sloppy.\n\nThe point of done_testing() is to additionally assert that the test\nscript ran to completion, so you don't get silent failures if something\nshould end up calling exit(0) prematurely (a non-zero exit status is\nconsidered a failure by the test harness).\n\nThe only cases where an explicit plan adds value is if you're running\ntests in a loop and care about the number of iterations, or have a\ncallback with a test inside that you want to make sure gets called. For\nthese, it's better to explicitly assert that the list you're iterating\nover is of the right length, or increment a counter in the loop or\ncallback and assert that it has the expected value. 
This has the added\nbenefit of the failure being coming from the relevant place and having a\nhelpful description, rather than a plan mismatch at the end which you\nthen have to hunt down the cause of.\n\n- ilmari\n\n\n", "msg_date": "Wed, 08 Dec 2021 14:48:59 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Readd use of TAP subtests" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 12/8/21 09:08, Dagfinn Ilmari Mannsåker wrote:\n>> Either way, I think we should be switching tests to done_testing()\n>> whenever it would otherwise have to adjust the test count, to avoid\n>> having to do that again and again and again going forward.\n\n> I'm not so sure. I don't think its necessarily a bad idea to have to\n> declare how many tests you're going to run.\n\nI think the main point is to make sure that the test script reached an\nintended exit point, rather than dying early someplace. It's not apparent\nto me why reaching a done_testing() call is a less reliable indicator of\nthat than executing some specific number of tests --- and I agree with\nilmari that maintaining the test count is a serious PITA.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Dec 2021 10:25:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Readd use of TAP subtests" }, { "msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> The only cases where an explicit plan adds value is if you're running\n> tests in a loop and care about the number of iterations, or have a\n> callback with a test inside that you want to make sure gets called. For\n> these, it's better to explicitly assert that the list you're iterating\n> over is of the right length, or increment a counter in the loop or\n> callback and assert that it has the expected value. 
This has the added\n> benefit of the failure being coming from the relevant place and having a\n> helpful description, rather than a plan mismatch at the end which you\n> then have to hunt down the cause of.\n\nYeah. A different way of stating that is that the test count adds\nsecurity only if you re-derive its proper value from first principles\nevery time you modify the test script. I don't know about you guys,\nbut the only way I've ever adjusted those numbers is to put in whatever\nthe error message said was right. I don't see how that's adding\nanything but make-work; it's certainly not doing much to help verify\nthe script's control flow.\n\nA question that seems pretty relevant here is: what exactly is the\npoint of using the subtest feature, if we aren't especially interested\nin its effect on the overall test count? I can see that it'd have\nvalue when you wanted to use skip_all to control a subset of a test\nrun, but I'm not detecting where is the value-added in the cases in\nPeter's proposed patch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Dec 2021 12:31:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Readd use of TAP subtests" }, { "msg_contents": "On 08.12.21 18:31, Tom Lane wrote:\n> A question that seems pretty relevant here is: what exactly is the\n> point of using the subtest feature, if we aren't especially interested\n> in its effect on the overall test count? I can see that it'd have\n> value when you wanted to use skip_all to control a subset of a test\n> run, but I'm not detecting where is the value-added in the cases in\n> Peter's proposed patch.\n\nIt's useful if you edit a test file and add (what would appear to be) N \ntests and want to update the number.\n\nBut I'm also OK with the done_testing() style, if there are no drawbacks \nto that.\n\nDoes that call into question why we raised the Test::More version to \nbegin with? 
Or were there other reasons?\n\n\n\n", "msg_date": "Thu, 9 Dec 2021 16:51:17 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Readd use of TAP subtests" }, { "msg_contents": "> On 8 Dec 2021, at 16:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I think the main point is to make sure that the test script reached an\n> intended exit point, rather than dying early someplace. It's not apparent\n> to me why reaching a done_testing() call is a less reliable indicator of\n> that than executing some specific number of tests --- and I agree with\n> ilmari that maintaining the test count is a serious PITA.\n\nFWIW I agree with this and am in favor of using done_testing().\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 9 Dec 2021 21:54:36 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Readd use of TAP subtests" }, { "msg_contents": "Now that we have switched everything to done_testing(), the subtests \nfeature isn't that relevant anymore, but it might still be useful to get \nbetter output when running with PROVE_FLAGS=--verbose. 
Compare before:\n\nt/001_basic.pl ..\n1..8\nok 1 - vacuumlo --help exit code 0\nok 2 - vacuumlo --help goes to stdout\nok 3 - vacuumlo --help nothing to stderr\nok 4 - vacuumlo --version exit code 0\nok 5 - vacuumlo --version goes to stdout\nok 6 - vacuumlo --version nothing to stderr\nok 7 - vacuumlo with invalid option nonzero exit code\nok 8 - vacuumlo with invalid option prints error message\nok\nAll tests successful.\nFiles=1, Tests=8, 0 wallclock secs ( 0.02 usr 0.00 sys + 0.08 cusr \n0.05 csys = 0.15 CPU)\nResult: PASS\n\nAfter (with attached patch):\n\nt/001_basic.pl ..\n# Subtest: vacuumlo --help\n ok 1 - exit code 0\n ok 2 - goes to stdout\n ok 3 - nothing to stderr\n 1..3\nok 1 - vacuumlo --help\n# Subtest: vacuumlo --version\n ok 1 - exit code 0\n ok 2 - goes to stdout\n ok 3 - nothing to stderr\n 1..3\nok 2 - vacuumlo --version\n# Subtest: vacuumlo options handling\n ok 1 - invalid option nonzero exit code\n ok 2 - invalid option prints error message\n 1..2\nok 3 - vacuumlo options handling\n1..3\nok\nAll tests successful.\nFiles=1, Tests=3, 0 wallclock secs ( 0.02 usr 0.01 sys + 0.11 cusr \n0.07 csys = 0.21 CPU)\nResult: PASS\n\nI think for deeply nested tests and test routines defined in other \nmodules, this helps structure the test output more like the test source \ncode is laid out, so it makes following the tests and locating failing \ntest code easier.", "msg_date": "Thu, 24 Feb 2022 11:16:03 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Readd use of TAP subtests" }, { "msg_contents": "> On 24 Feb 2022, at 11:16, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> I think for deeply nested tests and test routines defined in other modules,\n> this helps structure the test output more like the test source code is laid\n> out, so it makes following the tests and locating failing test code easier.\n\nI don't have any strong opinions on this, but if we go ahead with 
it I think\nthere should be a short note in src/test/perl/README about when substest could\nbe considered.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 24 Feb 2022 14:20:02 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Readd use of TAP subtests" }, { "msg_contents": "Hi,\n\nOn 2022-02-24 11:16:03 +0100, Peter Eisentraut wrote:\n> Now that we have switched everything to done_testing(), the subtests feature\n> isn't that relevant anymore, but it might still be useful to get better\n> output when running with PROVE_FLAGS=--verbose.\n\nI've incidentally played with subtests yesterdays, when porting\nsrc/interfaces/libpq/test/regress.pl to a tap test. Unfortunately it seems\nthat subtests aren't actually specified in the tap format, and that different\nlibraries generate different output formats. The reason this matters somewhat\nis that meson's testrunner can parse tap and give nicer progress / error\nreports. But since subtests aren't in the spec it can't currently parse\nthem...\n\nOpen issue since 2015:\nhttps://github.com/TestAnything/Specification/issues/2\n\nThe perl ecosystem is so moribund :(.\n\n\n> t/001_basic.pl ..\n> # Subtest: vacuumlo --help\n> ok 1 - exit code 0\n> ok 2 - goes to stdout\n> ok 3 - nothing to stderr\n> 1..3\n\nIt's clearly better.\n\nWe can approximate some of it by using is_deeply() and comparing exit, stdout,\nstderr at once. Particularly for helpers like program_help() that are used in\na lot of places.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 24 Feb 2022 07:00:33 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Readd use of TAP subtests" }, { "msg_contents": "On 24.02.22 16:00, Andres Freund wrote:\n> I've incidentally played with subtests yesterdays, when porting\n> src/interfaces/libpq/test/regress.pl to a tap test. 
Unfortunately it seems\n> that subtests aren't actually specified in the tap format, and that different\n> libraries generate different output formats. The reason this matters somewhat\n> is that meson's testrunner can parse tap and give nicer progress / error\n> reports. But since subtests aren't in the spec it can't currently parse\n> them...\n\nOk that's good to know. What exactly happens when it tries to parse \nthem? Does it not count them or does it fail somehow? The way the \noutput is structured\n\nt/001_basic.pl ..\n# Subtest: vacuumlo --help\n ok 1 - exit code 0\n ok 2 - goes to stdout\n ok 3 - nothing to stderr\n 1..3\nok 1 - vacuumlo --help\n\nit appears that it should be able to parse it nonetheless and should \njust count the non-indented lines.\n\n\n", "msg_date": "Fri, 25 Feb 2022 14:39:15 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Readd use of TAP subtests" }, { "msg_contents": "\nOn 2/25/22 08:39, Peter Eisentraut wrote:\n> On 24.02.22 16:00, Andres Freund wrote:\n>> I've incidentally played with subtests yesterdays, when porting\n>> src/interfaces/libpq/test/regress.pl to a tap test. Unfortunately it\n>> seems\n>> that subtests aren't actually specified in the tap format, and that\n>> different\n>> libraries generate different output formats. The reason this matters\n>> somewhat\n>> is that meson's testrunner can parse tap and give nicer progress / error\n>> reports. But since subtests aren't in the spec it can't currently parse\n>> them...\n>\n> Ok that's good to know.  What exactly happens when it tries to parse\n> them?  Does it not count them or does it fail somehow?  
The way the\n> output is structured\n>\n> t/001_basic.pl ..\n> # Subtest: vacuumlo --help\n>     ok 1 - exit code 0\n>     ok 2 - goes to stdout\n>     ok 3 - nothing to stderr\n>     1..3\n> ok 1 - vacuumlo --help\n>\n> it appears that it should be able to parse it nonetheless and should\n> just count the non-indented lines.\n\n\nAIUI TAP consumers are supposed to ignore lines they don't understand.\nThe Node TAP setup produces output like this, so perl is hardly alone\nhere. See <https://node-tap.org/docs/api/subtests/>\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 25 Feb 2022 09:43:20 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Readd use of TAP subtests" }, { "msg_contents": "Hi,\n\nOn 2022-02-25 14:39:15 +0100, Peter Eisentraut wrote:\n> On 24.02.22 16:00, Andres Freund wrote:\n> > I've incidentally played with subtests yesterdays, when porting\n> > src/interfaces/libpq/test/regress.pl to a tap test. Unfortunately it seems\n> > that subtests aren't actually specified in the tap format, and that different\n> > libraries generate different output formats. The reason this matters somewhat\n> > is that meson's testrunner can parse tap and give nicer progress / error\n> > reports. But since subtests aren't in the spec it can't currently parse\n> > them...\n>\n> Ok that's good to know. What exactly happens when it tries to parse them?\n> Does it not count them or does it fail somehow? 
The way the output is\n> structured\n\nSays that it can't pase a line of the tap output:\n16:06:55 MALLOC_PERTURB_=156 /usr/bin/perl /tmp/meson-test/build/../subtest.pl\n----------------------------------- output -----------------------------------\nstdout:\n# Subtest: a\n ok 1 - a: a\n ok 2 - a: b\n 1..2\nok 1 - a\n1..1\nstderr:\n\nTAP parsing error: unexpected input at line 4\n------------------------------------------------------------------------------\n\n\n> t/001_basic.pl ..\n> # Subtest: vacuumlo --help\n> ok 1 - exit code 0\n> ok 2 - goes to stdout\n> ok 3 - nothing to stderr\n> 1..3\n> ok 1 - vacuumlo --help\n>\n> it appears that it should be able to parse it nonetheless and should just\n> count the non-indented lines.\n\nIt looks like it's not ignoring indented lines...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 25 Feb 2022 08:26:45 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Readd use of TAP subtests" }, { "msg_contents": "On Fri, 2022-02-25 at 09:43 -0500, Andrew Dunstan wrote:\r\n> AIUI TAP consumers are supposed to ignore lines they don't understand.\r\n\r\nIt's undefined behavior [1]:\r\n\r\n> Any output that is not a version, a plan, a test line, a YAML block,\r\n> a diagnostic or a bail out is incorrect. How a harness handles the\r\n> incorrect line is undefined. Test::Harness silently ignores incorrect\r\n> lines, but will become more stringent in the future. 
TAP::Harness\r\n> reports TAP syntax errors at the end of a test run.\r\n\r\n--Jacob\r\n\r\n[1] https://testanything.org/tap-version-13-specification.html\r\n", "msg_date": "Fri, 25 Feb 2022 16:35:39 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Readd use of TAP subtests" }, { "msg_contents": "On Fri, 2022-02-25 at 16:35 +0000, Jacob Champion wrote:\r\n> On Fri, 2022-02-25 at 09:43 -0500, Andrew Dunstan wrote:\r\n> > AIUI TAP consumers are supposed to ignore lines they don't understand.\r\n> \r\n> It's undefined behavior [1]:\r\n\r\nAnd of course the minute I send this I notice that I've linked the v13\r\nspec instead of the original... sorry. Assuming Perl isn't marking its\r\ntests as version 13, you are correct:\r\n\r\n> Any output line that is not a version, a plan, a test line, a\r\n> diagnostic or a bail out is considered an “unknown” line. A TAP\r\n> parser is required to not consider an unknown line as an error but\r\n> may optionally choose to capture said line and hand it to the test\r\n> harness, which may have custom behavior attached. This is to allow\r\n> for forward compatability. Test::Harness silently ignores incorrect\r\n> lines, but will become more stringent in the future. TAP::Harness\r\n> reports TAP syntax errors at the end of a test run.\r\n\r\n--Jacob\r\n", "msg_date": "Fri, 25 Feb 2022 16:38:22 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Readd use of TAP subtests" }, { "msg_contents": "Hi,\n\nOn 2022-02-25 09:43:20 -0500, Andrew Dunstan wrote:\n> AIUI TAP consumers are supposed to ignore lines they don't understand.\n\nAre they?\n\nIn http://testanything.org/tap-version-13-specification.html there's:\n\n\"Lines written to standard output matching /^(not )?ok\\b/ must be interpreted\nas test lines. [...]All other lines must not be considered test output.\"\n\nThat says that all other lines aren't \"test ouput\". 
But what does that mean?\nIt certainly doesn't mean they can just be ignored, because obviously\n^(TAP version|#|1..|Bail out) isn't to be ignored.\n\nAnd then there's:\n\n\n\"\nAnything else\n\n Any output that is not a version, a plan, a test line, a YAML block, a\n diagnostic or a bail out is incorrect. How a harness handles the incorrect\n line is undefined. Test::Harness silently ignores incorrect lines, but will\n become more stringent in the future. TAP::Harness reports TAP syntax errors at\n the end of a test run.\n\"\n\nIf I were to to implement a tap parser this wouldn't make me ignore lines.\n\n\nContrasting to that:\nhttp://testanything.org/tap-specification.html\n\n\"\nAnything else\n\n A TAP parser is required to not consider an unknown line as an error but may\n optionally choose to capture said line and hand it to the test harness,\n which may have custom behavior attached. This is to allow for forward\n compatability. Test::Harness silently ignores incorrect lines, but will\n become more stringent in the future. TAP::Harness reports TAP syntax errors\n at the end of a test run.\n\"\n\nI honestly don't know what to make of that. Parsers are supposed to ignore\nunknown lines, Test::Harness silently ignores, but also \"TAP::Harness reports\nTAP syntax errors\"? This seems like a self contradictory mess.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 25 Feb 2022 08:41:19 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Readd use of TAP subtests" }, { "msg_contents": "\nOn 2/25/22 11:41, Andres Freund wrote:\n> Hi,\n>\n> On 2022-02-25 09:43:20 -0500, Andrew Dunstan wrote:\n>> AIUI TAP consumers are supposed to ignore lines they don't understand.\n> Are they?\n>\n> In http://testanything.org/tap-version-13-specification.html there's:\n>\n> \"Lines written to standard output matching /^(not )?ok\\b/ must be interpreted\n> as test lines. 
[...]All other lines must not be considered test output.\"\n>\n> That says that all other lines aren't \"test ouput\". But what does that mean?\n> It certainly doesn't mean they can just be ignored, because obviously\n> ^(TAP version|#|1..|Bail out) isn't to be ignored.\n\n\n\nI don't think we're following spec 13.\n\n\n>\n> And then there's:\n>\n>\n> \"\n> Anything else\n>\n> Any output that is not a version, a plan, a test line, a YAML block, a\n> diagnostic or a bail out is incorrect. How a harness handles the incorrect\n> line is undefined. Test::Harness silently ignores incorrect lines, but will\n> become more stringent in the future. TAP::Harness reports TAP syntax errors at\n> the end of a test run.\n> \"\n>\n> If I were to to implement a tap parser this wouldn't make me ignore lines.\n>\n>\n> Contrasting to that:\n> http://testanything.org/tap-specification.html\n>\n> \"\n> Anything else\n>\n> A TAP parser is required to not consider an unknown line as an error but may\n> optionally choose to capture said line and hand it to the test harness,\n> which may have custom behavior attached. This is to allow for forward\n> compatability. Test::Harness silently ignores incorrect lines, but will\n> become more stringent in the future. TAP::Harness reports TAP syntax errors\n> at the end of a test run.\n> \"\n>\n> I honestly don't know what to make of that. Parsers are supposed to ignore\n> unknown lines, Test::Harness silently ignores, but also \"TAP::Harness reports\n> TAP syntax errors\"? This seems like a self contradictory mess.\n>\n\nI agree it's a mess. Both of these \"specs\" are incredibly loose. 
I guess\nthat happens when the spec comes after the implementation.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 25 Feb 2022 12:28:36 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Readd use of TAP subtests" }, { "msg_contents": "On 25.02.22 17:26, Andres Freund wrote:\n>> Ok that's good to know. What exactly happens when it tries to parse them?\n>> Does it not count them or does it fail somehow? The way the output is\n>> structured\n> Says that it can't pase a line of the tap output:\n\nOk, then I suppose I'm withdrawing this.\n\nPerhaps in another 7 years or so this will be resolved and we can make \nanother attempt at this. ;-)\n\n\n", "msg_date": "Mon, 28 Feb 2022 17:02:56 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Readd use of TAP subtests" }, { "msg_contents": "On Mon, 2022-02-28 at 17:02 +0100, Peter Eisentraut wrote:\r\n> Perhaps in another 7 years or so this will be resolved and we can make \r\n> another attempt at this. ;-)\r\n\r\nFor what it's worth, the TAP 14 spec was officially released today:\r\n\r\n https://testanything.org/tap-version-14-specification.html\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 19 Apr 2022 19:07:35 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Readd use of TAP subtests" }, { "msg_contents": "> On 19 Apr 2022, at 21:07, Jacob Champion <pchampion@vmware.com> wrote:\n> \n> On Mon, 2022-02-28 at 17:02 +0100, Peter Eisentraut wrote:\n>> Perhaps in another 7 years or so this will be resolved and we can make \n>> another attempt at this. ;-)\n> \n> For what it's worth, the TAP 14 spec was officially released today:\n\nInteresting. I hadn't even registered that a v14 was in the works, and come to\nthink of it I'm not sure I've yet seen a v13 consumer or producer. 
For the TAP\nsupport in pg_regress I've kept to the original spec.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 19 Apr 2022 21:14:46 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Readd use of TAP subtests" } ]
[ { "msg_contents": "Hi,\r\n\r\n I run the following SQL in Postgres (14_STABLE), and got the results:\r\nzlyu=# create table t1(a int, b int);\r\nCREATE TABLE\r\nzlyu=# create table t2(a int, b int);\r\nCREATE TABLE\r\nzlyu=# insert into t1 values (null, 1);\r\nINSERT 0 1\r\nzlyu=# insert into t2 values (1, 1);\r\nINSERT 0 1\r\nzlyu=# select * from t1 where (a, b) not in (select * from t2);\r\n a | b\r\n---+---\r\n(0 rows)\r\n\r\nzlyu=# select * from t1 where (a, b) in (select * from t2);\r\n a | b\r\n---+---\r\n(0 rows)\r\n\r\nzlyu=# select * from t1 where array[a, b] in (select array[a,b] from t2);\r\n a | b\r\n---+---\r\n(0 rows)\r\n\r\nzlyu=# select * from t1 where array[a, b] not in (select array[a,b] from t2);\r\n a | b\r\n---+---\r\n | 1\r\n(1 row)\r\n\r\nI run the SQL without array expr​ in other DBs(orcale, sqlite, ...), they all behave\r\nthe same as Postgres.\r\n\r\nIt seems a bit confusing for me that 'not in' and 'in' the same subquery both return 0\r\nrows, but the table contains data.\r\n\r\nAlso, manually using array expression behaves differently from the first SQL. For not in case,\r\nI step in the code, and find array_eq will consider null = null as true, however ExecSubPlan will\r\nconsider null as unprovable and exclude that row.\r\n\r\nHow to understand the result? 
It seems SQL standard does not mention array operation for null\r\nvalue.\r\n\r\nThanks!", "msg_date": "Wed, 8 Dec 2021 15:15:22 +0000", "msg_from": "Zhenghua Lyu <zlyu@vmware.com>", "msg_from_op": true, "msg_subject": "Question on not-in and array-eq" }, { "msg_contents": "On Wed, Dec 8, 2021 at 8:15 AM Zhenghua Lyu <zlyu@vmware.com> wrote:\n\n> I run the SQL without array expr in other DBs(orcale, sqlite, ...), they\n> all behave\n> the same as Postgres.\n>\n> It seems a bit confusing for me that 'not in' and 'in' the same subquery\n> both return 0\n> rows, but the table contains data.\n>\n\nBecause of this dynamic the reliable negation of \"in\" is \"not (... in ...)\"\nas opposed to \"not in\".\n\nhttps://www.postgresql.org/docs/current/functions-comparison.html\n\n\"If the expression is row-valued, then IS NULL is true when the row\nexpression itself is null or when all the row's fields are null, while IS\nNOT NULL is true when the row expression itself is non-null and all the\nrow's fields are non-null. Because of this behavior, IS NULL and IS NOT\nNULL do not always return inverse results for row-valued expressions; in\nparticular, a row-valued expression that contains both null and non-null\nfields will return false for both tests.\"\n\nThe implications of the IS NULL treatment extends to equality checks and\nthus the \"[NOT] IN\" expression.\n\nAlso, manually using array expression behaves differently from the first\n> SQL. For not in case,\n> I step in the code, and find array_eq will consider null = null as true,\n> however ExecSubPlan will\n> consider null as unprovable and exclude that row.\n>\n> How to understand the result? It seems SQL standard does not mention array\n> operation for null\n> value.\n>\n\nWhen comparing two non-null array variables the result will be either true\nor false. If either of the array variables, as a whole, is null the result\nwill be null. This is due to the general rule that operations on null\nvalues result in null.
And the general desire to make array comparisons\nbehave in the manner expected by users as opposed to the surprising result\nthat row-valued values provide. The two simply are defined to behave\ndifferently - mainly due to the fact that for row-valued data we choose to\nadhere to the SQL Standard.\n\nDavid J.", "msg_date": "Wed, 8 Dec 2021 08:39:34 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Question on not-in and array-eq" }, { "msg_contents": "Thanks for your explanation.\n________________________________\nFrom: David G. Johnston <david.g.johnston@gmail.com>\nSent: Wednesday, December 8, 2021 11:39 PM\nTo: Zhenghua Lyu <zlyu@vmware.com>\nCc: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Question on not-in and array-eq\n\nOn Wed, Dec 8, 2021 at 8:15 AM Zhenghua Lyu <zlyu@vmware.com<mailto:zlyu@vmware.com>> wrote:\nI run the SQL without array expr in other DBs(orcale, sqlite, ...), they all behave\nthe same as Postgres.\n\nIt seems a bit confusing for me that 'not in' and 'in' the same subquery both return 0\nrows, but the table contains data.\n\nBecause of this dynamic the reliable negation of \"in\" is \"not (...
in ...)\" as opposed to \"not in\".\n\nhttps://www.postgresql.org/docs/current/functions-comparison.html<https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.postgresql.org%2Fdocs%2Fcurrent%2Ffunctions-comparison.html&data=04%7C01%7Czlyu%40vmware.com%7C2213729fc82044f4f8f808d9ba60fa8d%7Cb39138ca3cee4b4aa4d6cd83d9dd62f0%7C0%7C0%7C637745747952431673%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&sdata=%2F6eeHObPA3NZ5zDjQlTISgnjXrinLQojBtdtpFzJ018%3D&reserved=0>\n\n\"If the expression is row-valued, then IS NULL is true when the row expression itself is null or when all the row's fields are null, while IS NOT NULL is true when the row expression itself is non-null and all the row's fields are non-null. Because of this behavior, IS NULL and IS NOT NULL do not always return inverse results for row-valued expressions; in particular, a row-valued expression that contains both null and non-null fields will return false for both tests.\"\n\nThe implications of the IS NULL treatment extends to equality checks and thus the \"[NOT] IN\" expression.\n\nAlso, manually using array expression behaves differently from the first SQL. For not in case,\nI step in the code, and find array_eq will consider null = null as true, however ExecSubPlan will\nconsider null as unprovable and exclude that row.\n\nHow to understand the result? It seems SQL standard does not mention array operation for null\nvalue.\n\nWhen comparing two non-null array variables the result will be either true or false. If either of the array variables, as a whole, is null the result will be null. This is due to the general rule that operations on null values result in null. And the general desire to make array comparisons behave in the manner expected by users as opposed to the surprising result that row-valued values provide. 
The two simply are defined to behave differently - mainly due to the fact that for row-valued data we choose to adhere to the SQL Standard.\n\nDavid J.", "msg_date": "Thu, 9 Dec 2021 01:20:32 +0000", "msg_from": "Zhenghua Lyu <zlyu@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Question on not-in and array-eq" } ]
[ { "msg_contents": "A question I always have, and I didn´t find anybody answering it. If it´s\npossible\nselect * from MyDB.MySchema.MyTable;\n\nAnd from user point of view ... all databases are accessible for the same\npostgres instance, user just says connect to this or that database, why is\nit not possible to do\nselect * from FirstDB.FirstSchema.FirstTable join SecondDB.SecondSchema.\nSecondTable;\n\nEverything I found was how to connect, using FDW or DBLink, but not why.\n\nregards,\nMarcos", "msg_date": "Wed, 8 Dec 2021 15:09:42 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "Cross DB query" }, { "msg_contents": "On Wednesday, December 8, 2021, Marcos Pegoraro <marcos@f10.com.br> wrote:\n\n> A question I always have, and I didn´t find anybody answering it. If it´s\n> possible\n> select * from MyDB.MySchema.MyTable;\n>\n\nNo, if you specify MyDB is must match the database you’ve chosen to log\ninto.\n\n\n> Everything I found was how to connect, using FDW or DBLink, but not why.\n>\n>\nBecause someone decades ago made that decision and all of the internals\nrely upon it, making a change cost prohibitive. In short, we allow the\nsyntax because that standard says we should but its not very useful beyond\nthat.\n\nI’m pretty sure I’ve seen this in the documentation but a quick glance\ndidn’t turn it up.
Experimentation does prove that it works in this manner\nthough.\n\nDavid J.", "msg_date": "Wed, 8 Dec 2021 11:29:57 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cross DB query" } ]
[ { "msg_contents": "Hi.\n\nSome regex exposed a bunch of typos scattered across PG comments and docs.\n\nThey are all of the \"uses-an-instead-of-a\" (or vice versa) variety.\n\nPSA a small patch to fix them.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 9 Dec 2021 07:30:48 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Fix typos - \"an\" instead of \"a\"" }, { "msg_contents": "On Thu, Dec 09, 2021 at 07:30:48AM +1100, Peter Smith wrote:\n> Some regex exposed a bunch of typos scattered across PG comments and docs.\n> \n> They are all of the \"uses-an-instead-of-a\" (or vice versa) variety.\n>\n> PSA a small patch to fix them.\n\nGood catches.\n\n- # safe: cross compilers may not add the suffix if given an `-o'\n+ # safe: cross compilers may not add the suffix if given a `-o'\n # argument, so we may need to know it at that point already.\nOn this one, I think that you are right, and I can see that this is\nthe most common practice (aka grep --oids). But my brain also tells\nme that this is not completely wrong either. Thoughts?\n--\nMichael", "msg_date": "Thu, 9 Dec 2021 09:12:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix typos - \"an\" instead of \"a\"" }, { "msg_contents": "On Wed, Dec 8, 2021 at 5:12 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Dec 09, 2021 at 07:30:48AM +1100, Peter Smith wrote:\n>\n> - # safe: cross compilers may not add the suffix if given an `-o'\n> + # safe: cross compilers may not add the suffix if given a `-o'\n> # argument, so we may need to know it at that point already.\n> On this one, I think that you are right, and I can see that this is\n> the most common practice (aka grep --oids). But my brain also tells\n> me that this is not completely wrong either. Thoughts?\n>\n>\nI would read that aloud most comfortably using \"an\". 
I found an article\nthat seems to further support this since it both sounds like a vowel (oh)\nand is also a letter (oh).\n\nhttps://www.grammar.com/a-vs-an-when-to-use\n\nDavid J.", "msg_date": "Wed, 8 Dec 2021 17:25:31 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix typos - \"an\" instead of \"a\"" }, { "msg_contents": "On Thu, Dec 9, 2021 at 11:25 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n>> -   # safe: cross compilers may not add the suffix if given an `-o'\n>> +   # safe: cross compilers may not add the suffix if given a `-o'\n>>     # argument, so we may need to know it at that point already.\n>> On this one, I think that you are right, and I can see that this is\n>> the most common practice (aka grep --oids).  But my brain also tells\n>> me that this is not completely wrong either.  Thoughts?\n>>\n>\n> I would read that aloud most comfortably using \"an\".  I found an article that seems to further support this since it both sounds like a vowel (oh) and is also a letter (oh).\n>\n> https://www.grammar.com/a-vs-an-when-to-use\n>\n\nWhat about the \"-\" before the \"o\"?\nWouldn't it be read as \"dash o\" or \"minus o\"? 
This would mean \"a\" is\ncorrect, not \"an\", IMHO.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 9 Dec 2021 11:32:49 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix typos - \"an\" instead of \"a\"" }, { "msg_contents": "On Thu, Dec 9, 2021 at 11:12 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Dec 09, 2021 at 07:30:48AM +1100, Peter Smith wrote:\n> > Some regex exposed a bunch of typos scattered across PG comments and docs.\n> >\n> > They are all of the \"uses-an-instead-of-a\" (or vice versa) variety.\n> >\n> > PSA a small patch to fix them.\n>\n> Good catches.\n>\n> - # safe: cross compilers may not add the suffix if given an `-o'\n> + # safe: cross compilers may not add the suffix if given a `-o'\n> # argument, so we may need to know it at that point already.\n> On this one, I think that you are right, and I can see that this is\n> the most common practice (aka grep --oids). But my brain also tells\n> me that this is not completely wrong either. Thoughts?\n\nPersonally. I read \"-o\" as \"dash oh\" and so the \"a\" instead of \"an\"\nseemed right for me.\n\nYMMV.\n\n(But it is not worth spending more than 30 seconds debating on this so\nI really don't care whatever is decided)\n\n------\nKind Regards,\nPeter Smith\nFujitsu Australia\n\n\n", "msg_date": "Thu, 9 Dec 2021 11:36:52 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix typos - \"an\" instead of \"a\"" }, { "msg_contents": "On Wed, Dec 8, 2021 at 5:32 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n\n> On Thu, Dec 9, 2021 at 11:25 AM David G. 
Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >\n> >> -   # safe: cross compilers may not add the suffix if given an `-o'\n> >> +   # safe: cross compilers may not add the suffix if given a `-o'\n> >>     # argument, so we may need to know it at that point already.\n> >> On this one, I think that you are right, and I can see that this is\n> >> the most common practice (aka grep --oids).  But my brain also tells\n> >> me that this is not completely wrong either.  Thoughts?\n> >>\n> >\n> > I would read that aloud most comfortably using \"an\".  I found an article that seems to further support this since it both sounds like a vowel (oh) and is also a letter (oh).\n> >\n> > https://www.grammar.com/a-vs-an-when-to-use\n> >\n>\n> What about the \"-\" before the \"o\"?\n> Wouldn't it be read as \"dash o\" or \"minus o\"? This would mean \"a\" is\n> correct, not \"an\", IMHO.\n>\n\nYeah, I was treating the leading dash as being silent...the syntax dash(es)\nfor single and multi-character arguments seems unimportant to read aloud in\nthe general sense.  If one does read them then yes, \"a\" is correct.\nLacking any documented preference I would then just go with what is\nprevalent in existing usage.\n\nDavid J.", "msg_date": "Wed, 8 Dec 2021 17:47:39 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix typos - \"an\" instead of \"a\"" },
{ "msg_contents": "On Wed, Dec 08, 2021 at 05:47:39PM -0700, David G. Johnston wrote:\n> Yeah, I was treating the leading dash as being silent...the syntax dash(es)\n> for single and multi-character arguments seems unimportant to read aloud in\n> the general sense.  If one does read them then yes, \"a\" is correct.\n> Lacking any documented preference I would then just go with what is\n> prevalent in existing usage.\n\nInteresting, I would have thought that the dash should be silent.\nAnyway, I missed that as this comes from ./configure we don't need to\nchange anything as this file is generated by autoconf.  I have applied\nthe rest.\n--\nMichael", "msg_date": "Thu, 9 Dec 2021 15:22:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix typos - \"an\" instead of \"a\"" }, { "msg_contents": "On Thu, Dec 9, 2021 at 5:22 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Dec 08, 2021 at 05:47:39PM -0700, David G. 
Johnston wrote:\n> > Yeah, I was treating the leading dash as being silent...the syntax dash(es)\n> > for single and multi-character arguments seems unimportant to read aloud in\n> > the general sense. If one does read them then yes, \"a\" is correct.\n> > Lacking any documented preference I would then just go with what is\n> > prevalent in existing usage.\n>\n> Interesting, I would have thought that the dash should be silent.\n> Anyway, I missed that as this comes from ./configure we don't need to\n> change anything as this file is generated by autoconf. I have applied\n> the rest.\n\nThanks for pushing.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 9 Dec 2021 19:05:18 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix typos - \"an\" instead of \"a\"" } ]
[ { "msg_contents": "Hi,\n\nWith the meson patch applied the tests on windows run concurrently by\ndefault. Unfortunately that fails semi-regularly. The reason for this\nbasically is that windows defaults to using TCP in tests, and that the\ntap-test port determination is very racy:\n\n> # When selecting a port, we look for an unassigned TCP port number,\n> # even if we intend to use only Unix-domain sockets. This is clearly\n> # necessary on $use_tcp (Windows) configurations, and it seems like a\n> # good idea on Unixen as well.\n> $port = get_free_port();\n> ...\n> =item get_free_port()\n>\n> Locate an unprivileged (high) TCP port that's not currently bound to\n> anything. This is used by C<new()>, and also by some test cases that need to\n> start other, non-Postgres servers.\n>\n> Ports assigned to existing PostgreSQL::Test::Cluster objects are automatically\n> excluded, even if those servers are not currently running.\n>\n> XXX A port available now may become unavailable by the time we start\n> the desired service.\n\nI don't think there's an easy way to make this race-free. We'd need to teach\npostmaster to use pre-opened socket or something like that.\n\nAn alternative to that would be to specify a base port number externally. In\nthe meson branch I already did that for the pg_regress style tests, since they\ndon't have the automatic port thing above. But for tap tests there's currently\nno way to pass in a base-port that I can see.\n\n\nIs it perhaps time to to use unix sockets on windows by default\n(i.e. PG_TEST_USE_UNIX_SOCKETS), at least when on a new enough windows?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 8 Dec 2021 14:45:50 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "port conflicts when running tests concurrently on windows." }, { "msg_contents": "Hi,\n\nOn 2021-12-08 14:45:50 -0800, Andres Freund wrote:\n> Is it perhaps time to to use unix sockets on windows by default\n> (i.e. 
PG_TEST_USE_UNIX_SOCKETS), at least when on a new enough windows?\n\nOn its own PG_TEST_USE_UNIX_SOCKETS doesn't work at all on windows - it fails\ntrying to use /tmp/ as a socket directory. Using PG_REGRESS_SOCK_DIR fixes\nthat for PG_REGRESS. But the tap tests don't look at that :(.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 8 Dec 2021 16:36:14 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: port conflicts when running tests concurrently on windows." }, { "msg_contents": "Hi,\n\nOn 2021-12-08 16:36:14 -0800, Andres Freund wrote:\n> On 2021-12-08 14:45:50 -0800, Andres Freund wrote:\n> > Is it perhaps time to to use unix sockets on windows by default\n> > (i.e. PG_TEST_USE_UNIX_SOCKETS), at least when on a new enough windows?\n> \n> On its own PG_TEST_USE_UNIX_SOCKETS doesn't work at all on windows - it fails\n> trying to use /tmp/ as a socket directory. Using PG_REGRESS_SOCK_DIR fixes\n> that for PG_REGRESS. But the tap tests don't look at that :(.\n\nThe tap failures in turn are caused by the Cluster.pm choosing a socket\ndirectory with backslashes. Those backslashes are then treated as an escape\ncharacter both by guc.c and libpq.\n\nI think this can be addressed by something like\n\ndiff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm\nindex 9467a199c8f..c2a8487bbab 100644\n--- a/src/test/perl/PostgreSQL/Test/Cluster.pm\n+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm\n@@ -119,7 +119,15 @@ INIT\n \t$use_tcp = !$PostgreSQL::Test::Utils::use_unix_sockets;\n \t$test_localhost = \"127.0.0.1\";\n \t$last_host_assigned = 1;\n-\t$test_pghost = $use_tcp ? 
$test_localhost : PostgreSQL::Test::Utils::tempdir_short;\n+\tif ($use_tcp)\n+\t{\n+\t\t$test_pghost = $test_localhost;\n+\t}\n+\telse\n+\t{\n+\t\t$test_pghost = PostgreSQL::Test::Utils::tempdir_short;\n+\t\t$test_pghost =~ s!\\\\!/!g if $PostgreSQL::Test::Utils::windows_os;\n+\t}\n \t$ENV{PGHOST} = $test_pghost;\n \t$ENV{PGDATABASE} = 'postgres';\n \n\nI wonder if we need a host2unix() helper accompanying perl2host()? Seems nicer\nthan sprinkling s!\\\\!/!g if $PostgreSQL::Test::Utils::windows_os in a growing\nnumber of places...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 8 Dec 2021 17:03:07 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: port conflicts when running tests concurrently on windows." }, { "msg_contents": "Hi,\n\nOn 2021-12-08 17:03:07 -0800, Andres Freund wrote:\n> On 2021-12-08 16:36:14 -0800, Andres Freund wrote:\n> > On 2021-12-08 14:45:50 -0800, Andres Freund wrote:\n> > > Is it perhaps time to to use unix sockets on windows by default\n> > > (i.e. PG_TEST_USE_UNIX_SOCKETS), at least when on a new enough windows?\n> > \n> > On its own PG_TEST_USE_UNIX_SOCKETS doesn't work at all on windows - it fails\n> > trying to use /tmp/ as a socket directory. Using PG_REGRESS_SOCK_DIR fixes\n> > that for PG_REGRESS. But the tap tests don't look at that :(.\n> \n> The tap failures in turn are caused by the Cluster.pm choosing a socket\n> directory with backslashes. Those backslashes are then treated as an escape\n> character both by guc.c and libpq.\n> \n> I think this can be addressed by something like\n> [...]\n\nThat indeed seems to pass [1] where it previously failed [2]. 
There's another\nfailure that I haven't diagnosed yet, but it's independent of tcp vs unix sockets.\n\n[1] https://cirrus-ci.com/task/5055530235330560?logs=ssl_test#L5\n[2] https://cirrus-ci.com/task/5524596901281792?logs=ssl_test#L5\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 8 Dec 2021 17:22:27 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: port conflicts when running tests concurrently on windows." }, { "msg_contents": "On Thu, Dec 9, 2021 at 11:46 AM Andres Freund <andres@anarazel.de> wrote:\n> Is it perhaps time to to use unix sockets on windows by default\n> (i.e. PG_TEST_USE_UNIX_SOCKETS), at least when on a new enough windows?\n\nMakes sense. As a data point, it looks like this feature is in all\nsupported releases of Windows. It arrived in 1803, already EOL'd, and\nIIUC even a Windows Server 2016 \"LTSC\" system that's been disconnected\nfrom the internet and refusing all updates reaches \"mainstream EOL\"\nnext month. (Not a Windows person myself, but I've been looking at\nthis stuff while contemplating various filesystem-related changes...\nthere it's a little murkier, you need a WSL1-era kernel *and* you need\nto be running on top of local NTFS, which is harder for us to expect.)\n\nhttps://docs.microsoft.com/en-us/lifecycle/products/windows-10-enterprise-and-education\nhttps://docs.microsoft.com/en-us/windows-server/get-started/windows-server-release-info\nhttps://en.wikipedia.org/wiki/List_of_Microsoft_Windows_versions\n\n\n", "msg_date": "Thu, 9 Dec 2021 15:44:08 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: port conflicts when running tests concurrently on windows." }, { "msg_contents": "On 09.12.21 03:44, Thomas Munro wrote:\n> On Thu, Dec 9, 2021 at 11:46 AM Andres Freund<andres@anarazel.de> wrote:\n>> Is it perhaps time to to use unix sockets on windows by default\n>> (i.e. 
PG_TEST_USE_UNIX_SOCKETS), at least when on a new enough windows?\n\nMakes sense to get this to work, at least as an option.\n\n> Makes sense. As a data point, it looks like this feature is in all\n> supported releases of Windows. It arrived in 1803, already EOL'd, and\n> IIUC even a Windows Server 2016 \"LTSC\" system that's been disconnected\n> from the internet and refusing all updates reaches \"mainstream EOL\"\n> next month.\n\nI believe the \"18\" in \"1803\" refers to 2018. We have Windows buildfarm \nmembers that mention 2016 and 2017 in their title. Would those be in \ntrouble?\n\n\n", "msg_date": "Thu, 9 Dec 2021 14:35:37 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: port conflicts when running tests concurrently on windows." }, { "msg_contents": "\nOn 12/9/21 08:35, Peter Eisentraut wrote:\n> On 09.12.21 03:44, Thomas Munro wrote:\n>> On Thu, Dec 9, 2021 at 11:46 AM Andres Freund<andres@anarazel.de> \n>> wrote:\n>>> Is it perhaps time to to use unix sockets on windows by default\n>>> (i.e. PG_TEST_USE_UNIX_SOCKETS), at least when on a new enough windows?\n>\n> Makes sense to get this to work, at least as an option.\n>\n>> Makes sense.  As a data point, it looks like this feature is in all\n>> supported releases of Windows.  It arrived in 1803, already EOL'd, and\n>> IIUC even a Windows Server 2016 \"LTSC\" system that's been disconnected\n>> from the internet and refusing all updates reaches \"mainstream EOL\"\n>> next month.\n>\n> I believe the \"18\" in \"1803\" refers to 2018.  We have Windows\n> buildfarm members that mention 2016 and 2017 in their title.  Would\n> those be in trouble?\n\n\nProbably not if they have been updated. 
I have Windows machines\nsubstantially older than 2018 but now running versions dated later.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 9 Dec 2021 09:29:12 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: port conflicts when running tests concurrently on windows." }, { "msg_contents": "\nOn 12/8/21 20:03, Andres Freund wrote:\n>\n> I wonder if we need a host2unix() helper accompanying perl2host()? Seems nicer\n> than sprinkling s!\\\\!/!g if $PostgreSQL::Test::Utils::windows_os in a growing\n> number of places...\n>\n\nProbably a good idea. I would call it canonical_path or something like\nthat. / works quite happily as a path separator in almost all contexts\non Windows - there are a handful of command line programs that choke on\nit - but sometimes you need to quote the path. When I recently provided\nfor cross version upgrade testing on MSVC builds I just quoted\neverything. On Unix/Msys the shell just removes the quotes.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 9 Dec 2021 09:42:19 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: port conflicts when running tests concurrently on windows." }, { "msg_contents": "Hi,\n\nOn 2021-12-09 14:35:37 +0100, Peter Eisentraut wrote:\n> On 09.12.21 03:44, Thomas Munro wrote:\n> > On Thu, Dec 9, 2021 at 11:46 AM Andres Freund<andres@anarazel.de> wrote:\n> > > Is it perhaps time to to use unix sockets on windows by default\n> > > (i.e. PG_TEST_USE_UNIX_SOCKETS), at least when on a new enough windows?\n> \n> Makes sense to get this to work, at least as an option.\n\nWith https://github.com/anarazel/postgres/commit/046203741803da863f6129739fd215f8a32ec357\nall tests pass. 
pg_regress requires PG_REGRESS_SOCK_DIR because it checks for\nTMPDIR, but windows only has TMP and TEMP.\n\n\n> > Makes sense. As a data point, it looks like this feature is in all\n> > supported releases of Windows. It arrived in 1803, already EOL'd, and\n> > IIUC even a Windows Server 2016 \"LTSC\" system that's been disconnected\n> > from the internet and refusing all updates reaches \"mainstream EOL\"\n> > next month.\n> \n> I believe the \"18\" in \"1803\" refers to 2018. We have Windows buildfarm\n> members that mention 2016 and 2017 in their title. Would those be in\n> trouble?\n\nPerhaps it could make sense to print the windows version somewhere as part of\na windows build? Perhaps in the buildfarm client? Seems like it could be\ngenerally useful, outside of this specific issue.\n\nThe most minimal thing would be to just print cmd /c ver or\nsuch. systeminfo.exe output could also be useful, but has a bit of runtime\n(1.5s on my windows VM).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 9 Dec 2021 10:41:50 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: port conflicts when running tests concurrently on windows." }, { "msg_contents": "On 09.12.21 19:41, Andres Freund wrote:\n> Withhttps://github.com/anarazel/postgres/commit/046203741803da863f6129739fd215f8a32ec357\n> all tests pass. pg_regress requires PG_REGRESS_SOCK_DIR because it checks for\n> TMPDIR, but windows only has TMP and TEMP.\n\nThis looks reasonable so far. The commit messages \n8f3ec75de4060d86176ad4ac998eeb87a39748c2 and \n1d53432ff940b789c2431ba476a2a6e2db3edf84 contain some notes about what I \nthought at the time didn't work yet. In particular, the pg_upgrade \ntests don't support the use of Unix sockets on Windows. (Those tests \nhave been rewritten since, so I don't know what the status is.) Also, \nthe comment in pg_regress.c at remove_temp() is still there. 
These \nthings should probably be addressed before we can consider making this \nthe default.\n\n\n", "msg_date": "Fri, 10 Dec 2021 10:22:13 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: port conflicts when running tests concurrently on windows." }, { "msg_contents": "Hi,\n\nOn 2021-12-10 10:22:13 +0100, Peter Eisentraut wrote:\n> On 09.12.21 19:41, Andres Freund wrote:\n> > Withhttps://github.com/anarazel/postgres/commit/046203741803da863f6129739fd215f8a32ec357\n> > all tests pass. pg_regress requires PG_REGRESS_SOCK_DIR because it checks for\n> > TMPDIR, but windows only has TMP and TEMP.\n> \n> This looks reasonable so far.\n\nI pushed that part, since we clearly need something like them.\n\n\n> The commit messages 8f3ec75de4060d86176ad4ac998eeb87a39748c2 and\n> 1d53432ff940b789c2431ba476a2a6e2db3edf84 contain some notes about what I\n> thought at the time didn't work yet.\n\n> In particular, the pg_upgrade tests\n> don't support the use of Unix sockets on Windows. (Those tests have been\n> rewritten since, so I don't know what the status is.)\n\nISTM we still use two different implementations of the pg_upgrade tests :(. I\nrecall there being some recent-ish work on moving it to be a tap test, but\napparently not yet committed.\n\nIt doesn't look like the vcregress.pl implementation respects\nPG_TEST_USE_UNIX_SOCKETS right now.\n\n\n> pg_regress.c at remove_temp() is still there. These things should probably\n> be addressed before we can consider making this the default.\n\nHm, not immediately obvious what to do about this. Do you know if windows has\nrestrictions around the length of unix domain sockets? 
If not, I wonder if it\ncould be worth using the data directory as the socket path on windows instead\nof the separate temp directory?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 13 Dec 2021 11:33:24 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: port conflicts when running tests concurrently on windows." }, { "msg_contents": "On 13.12.21 20:33, Andres Freund wrote:\n>> pg_regress.c at remove_temp() is still there. These things should probably\n>> be addressed before we can consider making this the default.\n> \n> Hm, not immediately obvious what to do about this. Do you know if windows has\n> restrictions around the length of unix domain sockets? If not, I wonder if it\n> could be worth using the data directory as the socket path on windows instead\n> of the separate temp directory?\n\nAccording to src/include/port/win32.h, it's the same 108 or so bytes \nthat everyone else uses. That might not be the kernel limit, however, \njust the way the external API is defined.\n\nAfter the reading the material again, I think that comment might be \noverly cautious. The list of things not to do in a signal handler on \nWindows is not significantly different than those for say Linux, yet we \ndo a lot of them anyway. I'm tempted to just ignore the advice and do \nit anyway, while ignoring errors, which is what it already does.\n\n\n", "msg_date": "Wed, 15 Dec 2021 15:10:40 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: port conflicts when running tests concurrently on windows." } ]
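The root cause discussed at the top of this thread — `get_free_port()` probing for an unassigned TCP port that "may become unavailable by the time we start the desired service" — is a classic check-then-use race. A minimal illustration follows; this is a sketch in Python, not PostgreSQL test code, and the function name is merely borrowed from `PostgreSQL::Test::Cluster` for the analogy:

```python
# Probing for a free port and releasing it before the server binds it
# leaves a window in which a concurrent process can claim the port.
import socket

def get_free_port():
    # Bind to port 0 so the OS picks a free port, then release it.
    # The port is free *now*, but nothing reserves it for the caller.
    probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    probe.bind(("127.0.0.1", 0))
    port = probe.getsockname()[1]
    probe.close()
    return port

port = get_free_port()

# A concurrently started test (the "intruder") binds the probed port first,
# because nothing held it between the probe and the real bind:
intruder = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
intruder.bind(("127.0.0.1", port))

# The "server" we wanted to start on that port now loses the race:
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    server.bind(("127.0.0.1", port))
    outcome = "no conflict"
except OSError:
    outcome = "port conflict"  # what concurrently running tests can hit
finally:
    server.close()
    intruder.close()

print(outcome)
```

The race-free pattern is to bind port 0 and hand the still-open socket to the server — which is essentially the "teach postmaster to use a pre-opened socket" idea mentioned above — or to sidestep TCP entirely with Unix-domain sockets, as the rest of the thread pursues.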
[ { "msg_contents": "\"Applications using this level must be prepared to retry transactions\ndue to serialization failures.\"\n...\n\"When an application receives this error message, it should abort the\ncurrent transaction and retry the whole transaction from the\nbeginning.\"\n\nI note that the specific error codes this applies to are not\ndocumented, so let's discuss what the docs for that would look like.\n\nI had a conversation with Kevin Grittner about retry some years back\nand it seemed clear that the application should re-execute application\nlogic from the beginning, rather than just slavishly re-execute the\nsame SQL. But that is not documented either.\n\nIs *automatic* retry possible? In all cases? None? Or maybe Some?\n\nISTM that we can't retry anything where a transaction has replied to a\nuser and then the user issued a subsequent SQL statement, since we\nhave no idea whether the subsequent SQL was influenced by the initial\nreply.\n\nBut what about the case of a single statement transaction? Can we just\nre-execute then? I guess if it didn't run anything other than\nIMMUTABLE functions then it should be OK, assuming the inputs\nthemselves were immutable, which we've no way for the user to declare.\nCould we allow a user-defined auto_retry parameter?\n\nWe don't mention that a transaction might just repeatedly fail either.\n\nAnyway, I know y'all would have some opinions on this. 
Happy to document\nwhatever we agree.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 9 Dec 2021 12:42:59 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Documenting when to retry on serialization failure" }, { "msg_contents": "On Thu, Dec 9, 2021 at 7:43 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> I had a conversation with Kevin Grittner about retry some years back\n> and it seemed clear that the application should re-execute application\n> logic from the beginning, rather than just slavishly re-execute the\n> same SQL. But that is not documented either.\n\nYeah, that would be good to mention somehow.\n\n> Is *automatic* retry possible? In all cases? None? Or maybe Some?\n>\n> ISTM that we can't retry anything where a transaction has replied to a\n> user and then the user issued a subsequent SQL statement, since we\n> have no idea whether the subsequent SQL was influenced by the initial\n> reply.\n\nI agree.\n\n> But what about the case of a single statement transaction? Can we just\n> re-execute then? I guess if it didn't run anything other than\n> IMMUTABLE functions then it should be OK, assuming the inputs\n> themselves were immutable, which we've no way for the user to declare.\n> Could we allow a user-defined auto_retry parameter?\n\nI suppose in theory a user-defined parameter is possible, but I think\nit's fundamentally better for this to be managed on the application\nside. Even if the transaction is a single query, we don't know how\nexpensive that query is, and it's at least marginally possible that\nthe user might care about that. For example, if the user has set a\n10-minute timeout someplace, and the query fails after 8 minutes, they\nmay want to retry. But if we retry automatically then they might hit\ntheir timeout, or just be confused about why things are taking so\nlong. 
And they can always decide not to retry after all, but give up,\nsave it for a less busy period, or whatever.\n\n> We don't mention that a transaction might just repeatedly fail either.\n\nTrue. I think that's another good argument against an auto-retry system.\n\nThe main thing that worries me about an auto-retry system is something\nelse: I think it would rarely be applicable, and people would try to\napply it to situations where it won't actually work properly. I\nbelieve most users who need to retry transactions that fail due to\nserialization problems will need some real application logic to make\nsure that they do the right thing. People with single-statement\ntransactions that can be blindly retried probably aren't using higher\nisolation levels anyway, and probably won't have many failures even if\nthey are. SSI is really for sophisticated applications, and I think\ntrying to make it \"just work\" for people with dumb applications will,\nwell, just not work.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Dec 2021 09:37:27 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Documenting when to retry on serialization failure" }, { "msg_contents": "Fwiw I think the real problem with automatic retries is that the SQL\ninterface doesn't lend itself to it because the server never really\nknows if the command is going to be followed by a commit or more\ncommands.\n\nI actually think if that problem were tackled it would very likely be\na highly appreciated option. 
Because I think there's a big overlap\nbetween the set of users interested in higher isolation levels and the\nset of users writing stored procedures defining their business logic.\nThey're both kind of \"traditional\" SQL engine approaches and both lend\nthemselves to the environment where you have a lot of programmers\nworking on a system and you're not able to do things like define\nstrict locking and update orderings.\n\nSo a lot of users are probably looking at something like \"BEGIN;\nSELECT create_customer_order(....); COMMIT\" and wondering why the\nserver can't handle automatically retrying the query if they get an\nisolation failure.\n\nThere are actually other reasons why providing the whole logic for the\ntransaction up front with a promise that it'll be the whole\ntransaction is attractive. E.g. vacuum could ignore a transaction if\nit knows the transaction will never look at the table it's\nprocessing... Or automatic deadlock testing tools could extract the\nlist of tables being accessed and suggest \"lock table\" commands to put\nat the head of the transaction sorted in a canonical order.\n\nThese things may not be easy but they're currently impossible for the\nsame reasons automatically retrying is. 
The executor doesn't know what\nsubsequent commands will be coming after the current one and doesn't\nknow whether it has the whole transaction.\n\n\n", "msg_date": "Thu, 16 Dec 2021 01:05:07 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Documenting when to retry on serialization failure" }, { "msg_contents": "On Fri, Dec 10, 2021 at 1:43 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> \"Applications using this level must be prepared to retry transactions\n> due to serialization failures.\"\n> ...\n> \"When an application receives this error message, it should abort the\n> current transaction and retry the whole transaction from the\n> beginning.\"\n>\n> I note that the specific error codes this applies to are not\n> documented, so lets discuss what the docs for that would look like.\n\n+1 for naming the error.\n\n> I had a conversation with Kevin Grittner about retry some years back\n> and it seemed clear that the application should re-execute application\n> logic from the beginning, rather than just slavishly re-execute the\n> same SQL. But that is not documented either.\n\nRight, the result of the first statement could cause the application\nto do something completely different the second time through. I\npersonally think the best way for applications to deal with this\nproblem (and at least also deadlock, serialisation failure's\npessimistic cousin) is to represent transactions as blocks of code\nthat can be automatically retried, however that looks in your client\nlanguage. It might be that you pass a\nfunction/closure/whatever-you-call-it to the transaction management\ncode so it can rerun it if necessary, or that a function is decorated\nin some way that some magic infrastructure understands, but that's a\nlittle tricky to write about in a general enough way for our manual.\n(A survey of how this looks with various different libraries and tools\nmight make a neat conference talk though.) 
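To sketch the shape I mean (purely illustrative and driver-agnostic; the exception class below is a stand-in for whatever your client library raises for SQLSTATE 40001, e.g. psycopg2.errors.SerializationFailure, and none of the names come from any real API):

```python
class SerializationFailure(Exception):
    """Stand-in for SQLSTATE 40001 as surfaced by a real driver."""

def run_transaction(txn_body, max_attempts=5):
    """Run txn_body as one transaction, rerunning the *whole* block,
    application logic included, each time it fails with 40001."""
    for attempt in range(1, max_attempts + 1):
        try:
            # A real version would BEGIN here, hand txn_body an open
            # cursor, and COMMIT on the way out; the exception path
            # corresponds to a ROLLBACK.
            return txn_body(attempt)
        except SerializationFailure:
            continue  # start over from the very beginning
    raise RuntimeError("gave up after %d serialization failures"
                       % max_attempts)

def example_body(attempt):
    # Toy body standing in for real application logic; pretend the
    # first two attempts lose a serialization race.
    if attempt < 3:
        raise SerializationFailure
    return "committed on attempt %d" % attempt
```

The property that matters is that the whole closure is rerun, so a decision made from data read on attempt one is remade from scratch on attempt two.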
But isn't that exactly\nwhat that existing sentence \"... from the beginning\" is trying to say,\nespecially with the following sentence (\"The second time through...\")?\nHmm, yeah, perhaps that next sentence could be clearer.\n\n> Is *automatic* retry possible? In all cases? None? Or maybe Some?\n\nI'm aware of a couple of concrete cases that confound attempts to\nretry automatically: sometimes we report a unique constraint\nviolation or an exclusion constraint failure, when we have the\ninformation required to diagnose a serialisation anomaly. In those\ncases, we really should figure out how to spit out 40001 (otherwise\nwhat is general purpose auto retry code supposed to do with UCV?). We\nfixed a single-index variant of this problem in commit fcff8a57. I\nhave an idea for how this might be fixed for the multi-index UCV[1]\nand exclusion constraint[2] variants of the problem, but haven't\nactually tried yet.\n\nIf there are other things that stand in the way of reliable automated\nretry (= a list of error codes a client library could look for) then\nI'd love to have a list of them.\n\n> But what about the case of a single statement transaction? Can we just\n> re-execute then? I guess if it didn't run anything other than\n> IMMUTABLE functions then it should be OK, assuming the inputs\n> themselves were immutable, which we've no way for the user to declare.\n> Could we allow a user-defined auto_retry parameter?\n\nI've wondered about that too, but so far it didn't seem worth the\neffort, since application developers need another solution for\nmulti-statement retry anyway.\n\n> We don't mention that a transaction might just repeatedly fail either.\n\nAccording to the VLDB paper, the \"safe retry\" property (§ 5.4) means\nthat a retry won't abort for the same reason (due to a cycle with the\nsame set of other transactions as your last attempt), unless prepared\ntransactions are involved (§ 7.1). 
This means that the whole system\ncontinues to make some kind of progress in the absence of 2PC, though\nof course your transaction might or might not fail because of a cycle\nwith some other set of transactions. Maybe that is too technical for\nour manual, which already provides the link to that paper, but it's\ninteresting to note that you can suffer from a stuck busy-work loop\nuntil conflicting prepared xacts go away, with a naive\nautomatic-retry-forever system.\n\n[1] https://www.postgresql.org/message-id/flat/CAGPCyEZG76zjv7S31v_xPeLNRuzj-m%3DY2GOY7PEzu7vhB%3DyQog%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/CAMTXbE-sq9JoihvG-ccC70jpjMr%2BDWmnYUj%2BVdnFRFSRuaaLZQ%40mail.gmail.com\n\n\n", "msg_date": "Wed, 29 Dec 2021 16:29:54 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Documenting when to retry on serialization failure" }, { "msg_contents": "On Wed, 29 Dec 2021 at 03:30, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Fri, Dec 10, 2021 at 1:43 AM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> > \"Applications using this level must be prepared to retry transactions\n> > due to serialization failures.\"\n> > ...\n> > \"When an application receives this error message, it should abort the\n> > current transaction and retry the whole transaction from the\n> > beginning.\"\n> >\n> > I note that the specific error codes this applies to are not\n> > documented, so lets discuss what the docs for that would look like.\n>\n> +1 for naming the error.\n\nI've tried to sum up the various points from everybody into this doc\npatch. 
Thanks all for replies.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Wed, 29 Dec 2021 12:39:39 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Documenting when to retry on serialization failure" }, { "msg_contents": "On Thu, 16 Dec 2021 at 06:05, Greg Stark <stark@mit.edu> wrote:\n\n> So a lot of users are probably looking at something like \"BEGIN;\n> SELECT create_customer_order(....); COMMIT\" and wondering why the\n> server can't handle automatically retrying the query if they get an\n> isolation failure.\n\nI agree with you that it would be desirable to retry for the simple\ncase of an autocommit/single statement transaction run with\ndefault_transaction_isolation = 'serializability'.\n\nThe most important question before we take further action is whether\nthis would be correct to do so, in all cases.\n\nSome problem cases would help us decide either way.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 4 Jan 2022 11:49:28 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Documenting when to retry on serialization failure" }, { "msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> I've tried to sum up the various points from everybody into this doc\n> patch. Thanks all for replies.\n\nThis seemed rather badly in need of copy-editing. How do you\nlike the attached text?\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 23 Mar 2022 15:50:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Documenting when to retry on serialization failure" }, { "msg_contents": "On Wed, 23 Mar 2022 at 19:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > I've tried to sum up the various points from everybody into this doc\n> > patch. Thanks all for replies.\n>\n> This seemed rather badly in need of copy-editing. 
How do you\n> like the attached text?\n\nSeems clear and does the job.\n\nThe unique violation thing is worryingly general. Do we know enough to\nsay that this is thought to occur only with a) multiple unique\nconstraints, b) exclusion constraints?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 24 Mar 2022 10:43:54 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Documenting when to retry on serialization failure" }, { "msg_contents": "On Thu, Mar 24, 2022 at 11:44 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> The unique violation thing is worryingly general. Do we know enough to\n> say that this is thought to occur only with a) multiple unique\n> constraints, b) exclusion constraints?\n\nI'm aware of 3 cases. The two you mentioned, which I think we can fix\n(as described in the threads I posted upthread), and then there is a\nthird case that I'm still confused about, in the last line of\nread-write-unique-4.spec.\n\n\n", "msg_date": "Fri, 25 Mar 2022 00:01:01 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Documenting when to retry on serialization failure" }, { "msg_contents": "On Thu, 24 Mar 2022 at 11:01, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Thu, Mar 24, 2022 at 11:44 PM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> > The unique violation thing is worryingly general. Do we know enough to\n> > say that this is thought to occur only with a) multiple unique\n> > constraints, b) exclusion constraints?\n>\n> I'm aware of 3 cases. The two you mentioned, which I think we can fix\n> (as described in the threads I posted upthread), and then there is a\n> third case that I'm still confused about, in the last line of\n> read-write-unique-4.spec.\n\nI don't see any confusion - it is clearly a serialization error. 
What\nis more, I see this as a confusing bug that we should fix.\n\nIf we were updating the row rather than inserting it, we would get\n\"ERROR: could not serialize access due to concurrent update\", as\ndocumented. The type of command shouldn't affect whether it is a\nserialization error or not. (Attached patch proves it does throw\nserializable error for UPDATE).\n\nSolving this requires us to alter the Index API to pass down a\nsnapshot to allow us to test whether the concurrent insert is visible\nor not. The test is shown in the attached patch, but this doesn't\nattempt the major task of tweaking the APIs to allow this check to be\nmade.\n\n--\nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Thu, 24 Mar 2022 12:12:56 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Documenting when to retry on serialization failure" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Mar 24, 2022 at 11:44 PM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n>> The unique violation thing is worryingly general. Do we know enough to\n>> say that this is thought to occur only with a) multiple unique\n>> constraints, b) exclusion constraints?\n\n> I'm aware of 3 cases. The two you mentioned, which I think we can fix\n> (as described in the threads I posted upthread), and then there is a\n> third case that I'm still confused about, in the last line of\n> read-write-unique-4.spec.\n\nThat test is modeling the case where the application does an INSERT\nwith values based on some data it read earlier. There is no way for\nthe server to know that there's any connection, so I think if you\ntry to throw a serialization error rather than a uniqueness error,\nyou're basically lying to the client by claiming something you do not\nknow to be true. 
And the lie is not without consequences: if the\napplication believes it, it might iterate forever vainly trying to\ncommit a transaction that will never succeed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 24 Mar 2022 10:05:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Documenting when to retry on serialization failure" }, { "msg_contents": "On Thu, 24 Mar 2022 at 14:05, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Thu, Mar 24, 2022 at 11:44 PM Simon Riggs\n> > <simon.riggs@enterprisedb.com> wrote:\n> >> The unique violation thing is worryingly general. Do we know enough to\n> >> say that this is thought to occur only with a) multiple unique\n> >> constraints, b) exclusion constraints?\n>\n> > I'm aware of 3 cases. The two you mentioned, which I think we can fix\n> > (as described in the threads I posted upthread), and then there is a\n> > third case that I'm still confused about, in the last line of\n> > read-write-unique-4.spec.\n>\n> That test is modeling the case where the application does an INSERT\n> with values based on some data it read earlier. There is no way for\n> the server to know that there's any connection, so I think if you\n> try to throw a serialization error rather than a uniqueness error,\n> you're basically lying to the client by claiming something you do not\n> know to be true. And the lie is not without consequences: if the\n> application believes it, it might iterate forever vainly trying to\n> commit a transaction that will never succeed.\n\nOK, I see what you mean. There are 2 types of transaction, one that\nreads inside the transaction, one that decides what value to use some\nother way.\n\nSo now we have 2 cases, both of which generate uniqueness violations,\nbut only one of which might succeed if retried. 
The patch does cover\nthis, I guess, by saying be careful, but I would be happier if we can\nalso add\n\n\"this is thought to occur only with multiple unique constraints and/or\nan exclusion constraints\"\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 24 Mar 2022 14:28:51 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Documenting when to retry on serialization failure" }, { "msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> OK, I see what you mean. There are 2 types of transaction, one that\n> reads inside the transaction, one that decides what value to use some\n> other way.\n\n> So now we have 2 cases, both of which generate uniqueness violations,\n> but only one of which might succeed if retried. The patch does cover\n> this, I guess, by saying be careful, but I would be happier if we can\n> also add\n\n> \"this is thought to occur only with multiple unique constraints and/or\n> an exclusion constraints\"\n\nUm, what's that got to do with it? The example in \nread-write-unique-4.spec involves only a single pkey constraint.\n\nWe could add something trying to explain that if the application inserts a\nvalue into a constrained column based on data it read earlier, then any\nresulting constraint violation might be effectively a serialization\nfailure.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 24 Mar 2022 10:56:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Documenting when to retry on serialization failure" }, { "msg_contents": "On Thu, 24 Mar 2022 at 14:56, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > OK, I see what you mean. 
There are 2 types of transaction, one that\n> > reads inside the transaction, one that decides what value to use some\n> > other way.\n>\n> > So now we have 2 cases, both of which generate uniqueness violations,\n> > but only one of which might succeed if retried. The patch does cover\n> > this, I guess, by saying be careful, but I would be happier if we can\n> > also add\n>\n> > \"this is thought to occur only with multiple unique constraints and/or\n> > an exclusion constraints\"\n>\n> Um, what's that got to do with it? The example in\n> read-write-unique-4.spec involves only a single pkey constraint.\n\nYes, but as you explained, its not actually a serializable case, it\njust looks a bit like one.\n\nThat means we are not currently aware of any case where the situation\nis serializable but the error message is uniqueness violation, unless\nwe have 2 or more unique constraints and/or an exclusion constraint.\n\n> We could add something trying to explain that if the application inserts a\n> value into a constrained column based on data it read earlier, then any\n> resulting constraint violation might be effectively a serialization\n> failure.\n\nWe could do that as well.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 24 Mar 2022 15:43:25 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Documenting when to retry on serialization failure" }, { "msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> On Thu, 24 Mar 2022 at 14:56, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Um, what's that got to do with it? 
The example in\n>> read-write-unique-4.spec involves only a single pkey constraint.\n\n> Yes, but as you explained, its not actually a serializable case, it\n> just looks a bit like one.\n\n> That means we are not currently aware of any case where the situation\n> is serializable but the error message is uniqueness violation, unless\n> we have 2 or more unique constraints and/or an exclusion constraint.\n\nMeh. I'm disinclined to document it at that level of detail, both\nbecause it's subject to change and because we're not sure that that\nlist is exhaustive. I think a bit of handwaving is preferable.\nHow about the attached? (Only the third new para is different.)\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 24 Mar 2022 12:29:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Documenting when to retry on serialization failure" }, { "msg_contents": "On Thu, 24 Mar 2022 at 16:29, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > On Thu, 24 Mar 2022 at 14:56, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Um, what's that got to do with it? The example in\n> >> read-write-unique-4.spec involves only a single pkey constraint.\n>\n> > Yes, but as you explained, its not actually a serializable case, it\n> > just looks a bit like one.\n>\n> > That means we are not currently aware of any case where the situation\n> > is serializable but the error message is uniqueness violation, unless\n> > we have 2 or more unique constraints and/or an exclusion constraint.\n>\n> Meh. I'm disinclined to document it at that level of detail, both\n> because it's subject to change and because we're not sure that that\n> list is exhaustive. I think a bit of handwaving is preferable.\n> How about the attached? 
(Only the third new para is different.)\n\nIt's much better, thanks.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 24 Mar 2022 16:37:52 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Documenting when to retry on serialization failure" }, { "msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> On Thu, 24 Mar 2022 at 16:29, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> How about the attached? (Only the third new para is different.)\n\n> It's much better, thanks.\n\nPushed then.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 24 Mar 2022 13:35:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Documenting when to retry on serialization failure" } ]
[ { "msg_contents": "When a user must shut down and restart in single-user mode to run\nvacuum on an entire database, that does a lot of work that's\nunnecessary for getting the system online again, even without\nindex_cleanup. We had a recent case where a single-user vacuum took\naround 3 days to complete.\n\nNow that we have a concept of a fail-safe vacuum, maybe it would be\nbeneficial to skip a vacuum in single-user mode if the fail-safe\ncriteria were not met at the beginning of vacuuming a relation. This\nis not without risk, of course, but it should be much faster than\ntoday and once up and running the admin would have a chance to get a\nhandle on things. Thoughts?\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Dec 2021 15:28:18 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "do only critical work during single-user vacuum?" }, { "msg_contents": "On 12/9/21, 11:34 AM, \"John Naylor\" <john.naylor@enterprisedb.com> wrote:\r\n> When a user must shut down and restart in single-user mode to run\r\n> vacuum on an entire database, that does a lot of work that's\r\n> unnecessary for getting the system online again, even without\r\n> index_cleanup. We had a recent case where a single-user vacuum took\r\n> around 3 days to complete.\r\n>\r\n> Now that we have a concept of a fail-safe vacuum, maybe it would be\r\n> beneficial to skip a vacuum in single-user mode if the fail-safe\r\n> criteria were not met at the beginning of vacuuming a relation. This\r\n> is not without risk, of course, but it should be much faster than\r\n> today and once up and running the admin would have a chance to get a\r\n> handle on things. 
Thoughts?\r\n\r\nWould the --min-xid-age and --no-index-cleanup vacuumdb options help\r\nwith this?\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 9 Dec 2021 20:32:53 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Thu, Dec 9, 2021 at 11:28 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> Now that we have a concept of a fail-safe vacuum, maybe it would be\n> beneficial to skip a vacuum in single-user mode if the fail-safe\n> criteria were not met at the beginning of vacuuming a relation.\n\nObviously the main goal of the failsafe is to not get into this\nsituation in the first place. But it's still very reasonable to ask\n"what happens when the failsafe even fails at that?\". This was\nsomething that we considered directly when working on the feature.\n\nThere is a precheck that takes place before any other work, which\nensures that we won't even start off any of the nonessential tasks the\nfailsafe skips (e.g., index vacuuming). The precheck works like any\nother check -- it checks if relfrozenxid is dangerously old. (We won't\neven bother trying to launch parallel workers when this precheck\ntriggers, which is another reason to have it that Masahiko pointed\nout during development.)\n\nPresumably there is no need to specifically check if we're running in\nsingle user mode when considering if we need to trigger the failsafe\n-- which, as you say, we won't do. It shouldn't matter, because\nanybody running single-user mode just to VACUUM must already be unable\nto allocate new XIDs outside of single user mode. 
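For reference, how close each database is to that point can be watched with an ordinary catalog query (nothing new needed here; the default cited in the comment is the shipped value of vacuum_failsafe_age):

```sql
-- XID and multixact "age" per database.  The failsafe kicks in once
-- age(datfrozenxid) passes vacuum_failsafe_age (1.6 billion by default),
-- and the server refuses new XIDs a few million shy of wraparound.
SELECT datname,
       age(datfrozenxid) AS xid_age,
       mxid_age(datminmxid) AS mxid_age
FROM pg_database
ORDER BY 2 DESC;
```

A database that has already been forced into single-user mode will show an age right up against that stop limit, which is the condition in question. 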
That condition alone\nwill trigger the failsafe.\n\nThat said, it would be very easy to add a check for single user mode.\nIt didn't happen because we weren't aware of any specific need for it.\nPerhaps there is an argument for it.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 9 Dec 2021 13:04:43 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Thu, Dec 9, 2021 at 1:04 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Thu, Dec 9, 2021 at 11:28 AM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n> > Now that we have a concept of a fail-safe vacuum, maybe it would be\n> > beneficial to skip a vacuum in single-user mode if the fail-safe\n> > criteria were not met at the beginning of vacuuming a relation.\n>\n> Obviously the main goal of the failsafe is to not get into this\n> situation in the first place. But it's still very reasonable to ask\n> \"what happens when the failsafe even fails at that?\". This was\n> something that we considered directly when working on the feature.\n\nOh, I think I misunderstood. Your concern is for the case where the\nDBA runs a simple \"VACUUM\" in single-user mode; you want to skip over\ntables that don't really need to advance relfrozenxid, automatically.\n\nI can see an argument for something like that, but I think that it\nshould be a variant of VACUUM. Or maybe it could be addressed with a\nbetter user interface; single-user mode should prompt the user about\nwhat exact VACUUM command they ought to run to get things going.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 9 Dec 2021 13:12:50 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" 
}, { "msg_contents": "Hi,\n\nOn 2021-12-09 15:28:18 -0400, John Naylor wrote:\n> When a user must shut down and restart in single-user mode to run\n> vacuum on an entire database, that does a lot of work that's\n> unnecessary for getting the system online again, even without\n> index_cleanup. We had a recent case where a single-user vacuum took\n> around 3 days to complete.\n> \n> Now that we have a concept of a fail-safe vacuum, maybe it would be\n> beneficial to skip a vacuum in single-user mode if the fail-safe\n> criteria were not met at the beginning of vacuuming a relation. This\n> is not without risk, of course, but it should be much faster than\n> today and once up and running the admin would have a chance to get a\n> handle on things. Thoughts?\n\nWhat if the user tried to reclaim space by vacuuming (via truncation)? Or is\nworking around some corruption or such? I think this is too much magic.\n\nThat said, having a VACUUM \"selector\" that selects the oldest tables could be\nquite useful. And address this usecase both for single-user and normal\noperation.\n\nAnother thing that might be worth doing is to update relfrozenxid earlier. We\ndefinitely should update it before doing truncation (that can be quite\nexpensive). But we probably should do it even before the final\nlazy_cleanup_all_indexes() pass - often that'll be the only pass, and there's\nreally no reason to delay relfrozenxid advancement till after that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 9 Dec 2021 14:08:39 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Thu, Dec 9, 2021 at 5:13 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Oh, I think I misunderstood. 
Your concern is for the case where the\n> DBA runs a simple \"VACUUM\" in single-user mode; you want to skip over\n> tables that don't really need to advance relfrozenxid, automatically.\n\nRight.\n\n> I can see an argument for something like that, but I think that it\n> should be a variant of VACUUM. Or maybe it could be addressed with a\n> better user interface;\n\nOn Thu, Dec 9, 2021 at 6:08 PM Andres Freund <andres@anarazel.de> wrote:\n> What if the user tried to reclaim space by vacuuming (via truncation)? Or is\n> working around some corruption or such? I think this is too much magic.\n>\n> That said, having a VACUUM \"selector\" that selects the oldest tables could be\n> quite useful. And address this usecase both for single-user and normal\n> operation.\n\nAll good points.\n\n[Peter again]\n> single-user mode should prompt the user about\n> what exact VACUUM command they ought to run to get things going.\n\nThe current message is particularly bad in its vagueness because some\nusers immediately reach for VACUUM FULL, which quite logically seems\nlike the most complete thing to do.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Dec 2021 19:53:40 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Thu, Dec 9, 2021 at 3:53 PM John Naylor <john.naylor@enterprisedb.com> wrote:\n> > single-user mode should prompt the user about\n> > what exact VACUUM command they ought to run to get things going.\n>\n> The current message is particularly bad in its vagueness because some\n> users immediately reach for VACUUM FULL, which quite logically seems\n> like the most complete thing to do.\n\nYou mean the GetNewTransactionId() error, about single-user mode? Why\ndo we need to use single-user mode at all? 
I'm pretty sure that the\nreason is \"as an escape hatch\", but I wonder what that really means.\n\n***Thinks***\n\nI suppose that it might be a good idea to make sure that autovacuum\ncannot run, because in general autovacuum might need to allocate an\nXID (for autoanalyze), and locking all that down in exactly the right\nway might not be a very good use of our time.\n\nBut even still, why not have some variant of single-user mode just for\nthis task? Something that's easy to use when the DBA is rudely\nawakened at 4am -- something a little bit like a big red button that\nfixes the exact problem of XID exhaustion, in a reasonably targeted\nway? I don't think that this needs to involve the VACUUM command\nitself.\n\nThe current recommendation to do a whole-database VACUUM doesn't take\na position on how old the oldest datfrozenxid has to be in order to\nbecome safe again, preferring to \"make a conservative recommendation\"\n-- which is what a database-level VACUUM really is. But that doesn't\nseem helpful at all. In fact, it's not even conservative. We could\neasily come up with a reasonable definition of \"datfrozenxid that's\nsufficiently new to make it safe to come back online and allocate XIDs\nagain\". Perhaps something based on the current\nautovacuum_freeze_max_age (and autovacuum_multixact_freeze_max_age)\nsettings, with sanity checks.\n\nWe could then apply this criteria in new code that implements this\n\"big red button\" (maybe this is a new option for the postgres\nexecutable, a little like --single?). Something that's reasonably\ntargeted, and dead simple to use.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 9 Dec 2021 16:34:53 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" 
}, { "msg_contents": "On 12/9/21, 12:33 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> On 12/9/21, 11:34 AM, \"John Naylor\" <john.naylor@enterprisedb.com> wrote:\r\n>> Now that we have a concept of a fail-safe vacuum, maybe it would be\r\n>> beneficial to skip a vacuum in single-user mode if the fail-safe\r\n>> criteria were not met at the beginning of vacuuming a relation. This\r\n>> is not without risk, of course, but it should be much faster than\r\n>> today and once up and running the admin would have a chance to get a\r\n>> handle on things. Thoughts?\r\n>\r\n> Would the --min-xid-age and --no-index-cleanup vacuumdb options help\r\n> with this?\r\n\r\nSorry, I'm not sure what I was thinking. Of course you cannot use\r\nvacuumdb in single-user mode. But I think something like\r\n--min-xid-age in VACUUM is what you are looking for.\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 10 Dec 2021 01:05:42 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On 12/9/21, 4:36 PM, \"Peter Geoghegan\" <pg@bowt.ie> wrote:\r\n> We could then apply this criteria in new code that implements this\r\n> \"big red button\" (maybe this is a new option for the postgres\r\n> executable, a little like --single?). Something that's reasonably\r\n> targeted, and dead simple to use.\r\n\r\n+1\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 10 Dec 2021 01:06:33 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On 12/9/21, 5:06 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> On 12/9/21, 4:36 PM, \"Peter Geoghegan\" <pg@bowt.ie> wrote:\r\n>> We could then apply this criteria in new code that implements this\r\n>> \"big red button\" (maybe this is a new option for the postgres\r\n>> executable, a little like --single?). 
Something that's reasonably\r\n>> targeted, and dead simple to use.\r\n>\r\n> +1\r\n\r\nAs Andres noted, such a feature might be useful during normal\r\noperation, too. Perhaps the vacuumdb --min-xid-age stuff should be\r\nmoved to a new VACUUM option.\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 10 Dec 2021 01:11:57 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Thu, Dec 9, 2021 at 5:12 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> As Andres noted, such a feature might be useful during normal\n> operation, too. Perhaps the vacuumdb --min-xid-age stuff should be\n> moved to a new VACUUM option.\n\nI was thinking of something like pg_import_system_collations() for\nthis: a function that's built-in, and can be called in single user\nmode, that nevertheless doesn't make any assumptions about how it may\nbe called. Nothing stops a superuser from calling\npg_import_system_collations() themselves, outside of initdb. That\nisn't particularly common, but it works in the way you'd expect it to\nwork. It's easy to test.\n\nI imagine that this new function (to handle maintenance tasks in the\nevent of a wraparound emergency) would output information about its\nprogress. For example, it would make an up-front decision about which\ntables needed to be vacuumed in order for the current DB's\ndatfrozenxid to be sufficiently new, before it started anything (with\nhandling for edge-cases with many tables, perhaps). It might also show\nthe size of each table, and show another line for each table that has\nbeen processed so far, as a rudimentary progress indicator.\n\nWe could still have a separate option for the postgres executable,\njust to invoke single-user mode and call this function. 
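For the up-front decision, something along these lines over the existing catalogs would do (just a sketch):

```sql
-- Which relations are actually holding back this database's
-- datfrozenxid, i.e. the minimal set an emergency VACUUM must process.
SELECT c.oid::regclass AS relation,
       age(c.relfrozenxid) AS xid_age,
       pg_size_pretty(pg_table_size(c.oid)) AS size
FROM pg_class c
WHERE c.relkind IN ('r', 'm', 't')   -- tables, matviews, TOAST
ORDER BY age(c.relfrozenxid) DESC
LIMIT 20;
```

The hypothetical built-in function would do the equivalent in C, printing a progress line per relation as each one finishes, so the command-line option itself stays thin. 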
It would\nmostly just be window dressing, of course, but that still seems\nuseful.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 9 Dec 2021 17:25:53 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "Hi,\n\nOn 2021-12-09 16:34:53 -0800, Peter Geoghegan wrote:\n> But even still, why not have some variant of single-user mode just for\n> this task?\n\n> Something that's easy to use when the DBA is rudely\n> awakened at 4am -- something a little bit like a big red button that\n> fixes the exact problem of XID exhaustion, in a reasonably targeted\n> way? I don't think that this needs to involve the VACUUM command\n> itself.\n\nI think we should move *away* from single user mode, rather than the\nopposite. It's a substantial code burden and it's hard to use.\n\nI don't think single user mode is a good fit for this anyway - it's inherently\nfocussed on connecting to a single database. But wraparound issues often\ninvolve more than one database (often just because of shared catalogs).\n\n\nAlso, requiring a restart will often exacerbate the problem - the cache will\nbe cold, there's no walwriter, etc, making the vacuum slower. Making vacuum\nnot consume an xid seems like a lot more promising - and quite doable. Then\nthere's no need to restart at all.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 9 Dec 2021 17:56:16 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Thu, Dec 9, 2021 at 5:56 PM Andres Freund <andres@anarazel.de> wrote:\n> I think we should move *away* from single user mode, rather than the\n> opposite. 
It's a substantial code burden and it's hard to use.\n\nI wouldn't say that this is moving closer to single user mode.\n\n> I don't think single user mode is a good fit for this anyway - it's inherently\n> focussed on connecting to a single database. But wraparound issues often\n> involve more than one database (often just because of shared catalogs).\n\nI don't disagree with any of that. My suggestions were based on the\nassumption that it might be unrealistic to expect somebody to spend a\nhuge amount of time on this, given that (in a certain sense) it's\nnever really supposed to be used. Even a very simple approach would be\na big improvement.\n\n> Also, requiring a restart will often exascerbate the problem - the cache will\n> be cold, there's no walwriter, etc, making the vacuum slower. Making vacuum\n> not consume an xid seems like a lot more promising - and quite doable. Then\n> there's no need to restart at all.\n\nI didn't give too much consideration to what it would take to keep the\nsystem partially online, without introducing excessive complexity.\nMaybe it wouldn't be that hard to teach the system to stop allocating\nXIDs, while still allowing autovacuum workers to continue to get the\nsystem functioning again. With the av workers taking a particular\nemphasis on doing whatever work is required for the system to be able\nto allocate XIDs again -- but not too much more (not until things are\nback to normal). Now the plan is starting to get ambitious relative to\nhow often it'll be seen by users, though.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 9 Dec 2021 18:32:12 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On 12/9/21, 5:27 PM, \"Peter Geoghegan\" <pg@bowt.ie> wrote:\r\n> I imagine that this new function (to handle maintenance tasks in the\r\n> event of a wraparound emergency) would output information about its\r\n> progress. 
For example, it would make an up-front decision about which\r\n> tables needed to be vacuumed in order for the current DB's\r\n> datfrozenxid to be sufficiently new, before it started anything (with\r\n> handling for edge-cases with many tables, perhaps). It might also show\r\n> the size of each table, and show another line for each table that has\r\n> been processed so far, as a rudimentary progress indicator.\r\n\r\nI like the idea of having a built-in function that does the bare\r\nminimum to resolve wraparound emergencies, and I think providing some\r\nsort of simple progress indicator (even if rudimentary) would be very\r\nuseful. I imagine the decision logic could be pretty simple. If\r\nwe're only interested in getting the cluster out of a wraparound\r\nemergency, we can probably just look for all tables with an age over\r\n~2B.\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 10 Dec 2021 04:41:07 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Thu, Dec 9, 2021 at 8:41 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> I like the idea of having a built-in function that does the bare\n> minimum to resolve wraparound emergencies, and I think providing some\n> sort of simple progress indicator (even if rudimentary) would be very\n> useful.\n\nIf John doesn't have time to work on this during the Postgres 15\ncycle, and if nobody else picks it up, then we should at least do the\nbare minimum here: force the use of the failsafe in single user mode\n(regardless of the age of relfrozenxid/relminmxid, which in general\nmight not be that old in tables where VACUUM might need to do a lot of\nwork). Attached quick and dirty patch shows what this would take. 
If\nnothing else, it seems natural to define running any VACUUM in single\nuser mode as an emergency.\n\nThis is really the least we could do -- it's much better than nothing,\nbut still really lazy^M^M^M^M conservative. I haven't revised the\nassumption that the user should do a top-level \"VACUUM\" in databases\nthat can no longer allocate XIDs due to wraparound, despite the fact\nthat we could do far better with moderate effort. Although it might\nmake sense to commit something like the attached as part of a more\nworked out solution (assuming it didn't fully remove single user mode\nfrom the equation, which would be better still).\n\n-- \nPeter Geoghegan", "msg_date": "Mon, 20 Dec 2021 17:17:26 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "Hi,\n\nOn 2021-12-20 17:17:26 -0800, Peter Geoghegan wrote:\n> On Thu, Dec 9, 2021 at 8:41 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> > I like the idea of having a built-in function that does the bare\n> > minimum to resolve wraparound emergencies, and I think providing some\n> > sort of simple progress indicator (even if rudimentary) would be very\n> > useful.\n> \n> If John doesn't have time to work on this during the Postgres 15\n> cycle, and if nobody else picks it up, then we should at least do the\n> bare minimum here: force the use of the failsafe in single user mode\n> (regardless of the age of relfrozenxid/relminmxid, which in general\n> might not be that old in tables where VACUUM might need to do a lot of\n> work). Attached quick and dirty patch shows what this would take. If\n> nothing else, it seems natural to define running any VACUUM in single\n> user mode as an emergency.\n\nAs I said before I think this is a bad idea. I'm fine with adding a vacuum\nparameter forcing failsafe mode. And perhaps a hint to suggest it in single\nuser mode. 
But forcing it is a bad idea - single user isn't just used for\nemergencies (c.f. initdb, which this patch would regress) and not every\nemergency making single user mode useful is related to wraparound.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 20 Dec 2021 19:46:30 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Tue, Dec 21, 2021 at 12:46 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-12-20 17:17:26 -0800, Peter Geoghegan wrote:\n> > On Thu, Dec 9, 2021 at 8:41 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> > > I like the idea of having a built-in function that does the bare\n> > > minimum to resolve wraparound emergencies, and I think providing some\n> > > sort of simple progress indicator (even if rudimentary) would be very\n> > > useful.\n> >\n> > If John doesn't have time to work on this during the Postgres 15\n> > cycle, and if nobody else picks it up, then we should at least do the\n> > bare minimum here: force the use of the failsafe in single user mode\n> > (regardless of the age of relfrozenxid/relminmxid, which in general\n> > might not be that old in tables where VACUUM might need to do a lot of\n> > work). Attached quick and dirty patch shows what this would take. If\n> > nothing else, it seems natural to define running any VACUUM in single\n> > user mode as an emergency.\n>\n> As I said before I think this is a bad idea. I'm fine with adding a vacuum\n> parameter forcing failsafe mode. And perhaps a hint to suggest it in single\n> user mode. But forcing it is a bad idea - single user isn't just used for\n> emergencies (c.f. 
initdb, which this patch would regress) and not every\n> emergency making single user mode useful is related to wraparound.\n\n+1\n\nBTW a vacuum automatically enters failsafe mode under the situation\nwhere the user has to run a vacuum in the single-user mode, right?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 21 Dec 2021 13:40:12 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Mon, Dec 20, 2021 at 8:40 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> BTW a vacuum automatically enters failsafe mode under the situation\n> where the user has to run a vacuum in the single-user mode, right?\n\nOnly for the table that had the problem. Maybe there are no other\ntables that a database level \"VACUUM\" will need to spend much time on,\nor maybe there are, and they will make it take much much longer (it\nall depends).\n\nThe goal of the patch is to make sure that when we're in single user\nmode, we'll consistently trigger the failsafe, for every VACUUM\nagainst every table -- not just the table (or tables) whose\nrelfrozenxid is very old. That's still naive, but much less naive than\nsimply telling users to VACUUM the whole database in single user mode\nwhile vacuuming indexes, etc.\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 20 Dec 2021 20:52:39 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Tue, Dec 21, 2021 at 1:53 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Dec 20, 2021 at 8:40 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > BTW a vacuum automatically enters failsafe mode under the situation\n> > where the user has to run a vacuum in the single-user mode, right?\n>\n> Only for the table that had the problem. 
Maybe there are no other\n> tables that a database level \"VACUUM\" will need to spend much time on,\n> or maybe there are, and they will make it take much much longer (it\n> all depends).\n>\n> The goal of the patch is to make sure that when we're in single user\n> mode, we'll consistently trigger the failsafe, for every VACUUM\n> against every table -- not just the table (or tables) whose\n> relfrozenxid is very old. That's still naive, but much less naive than\n> simply telling users to VACUUM the whole database in single user mode\n> while vacuuming indexes, etc.\n\nI understand the patch, thank you for the explanation!\n\nI remember Simon proposed a VACUUM command option[1], called\nFAST_FREEZE, to turn off index cleanup and heap truncation. Now that\nwe have failsafe mechanism probably we can have a VACUUM command\noption to turn on failsafe mode instead.\n\nRegards,\n\n[1] https://commitfest.postgresql.org/32/2908/\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 21 Dec 2021 16:56:07 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Tue, Dec 21, 2021 at 3:56 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Dec 21, 2021 at 1:53 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > On Mon, Dec 20, 2021 at 8:40 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > BTW a vacuum automatically enters failsafe mode under the situation\n> > > where the user has to run a vacuum in the single-user mode, right?\n> >\n> > Only for the table that had the problem. 
Maybe there are no other\n> > tables that a database level \"VACUUM\" will need to spend much time on,\n> > or maybe there are, and they will make it take much much longer (it\n> > all depends).\n> >\n> > The goal of the patch is to make sure that when we're in single user\n> > mode, we'll consistently trigger the failsafe, for every VACUUM\n> > against every table -- not just the table (or tables) whose\n> > relfrozenxid is very old. That's still naive, but much less naive than\n> > simply telling users to VACUUM the whole database in single user mode\n> > while vacuuming indexes, etc.\n>\n> I understand the patch, thank you for the explanation!\n>\n> I remember Simon proposed a VACUUM command option[1], called\n> FAST_FREEZE, to turn off index cleanup and heap truncation. Now that\n> we have failsafe mechanism probably we can have a VACUUM command\n> option to turn on failsafe mode instead.\n\nI've been thinking a bit more about this, and I see two desirable\ngoals of anti-wraparound vacuum in single-user mode:\n\n1. Get out of single-user mode as quickly as possible.\n\n2. Minimize the catch-up work we have to do once we're out.\n\nCurrently, a naive vacuum does as much work as possible and leaves a\nbunch of WAL streaming and archiving work for later, so that much is\neasy to improve upon and we don't have to be terribly sophisticated.\nKeeping in mind Andres' point that we don't want to force possibly\nunwanted behavior just because we're in single-user mode, it makes\nsense to have some kind of option that has the above two goals.\nInstead of a boolean, it seems like the new option should specify some\nage below which VACUUM will skip the table entirely, and above which\nwill enter fail-safe mode. As mentioned earlier, the shutdown hint\ncould spell out the exact command. With this design, it would specify\nthe fail-safe default, or something else, to use with the option. 
That\nseems doable for v15 -- any thoughts on that approach?\n\nIn standard operation, the above goals could be restated as \"advance\nxmin as quickly as possible\" and \"generate as little future\n'work/debt' as possible, whether dirty pages or WAL\". There are some\nmore sophisticated things we can do in this regard, but something like\nthe above could also be useful in normal operation. In fact, that\n\"normal\" could be just after we restarted after doing the bare-minimum\nin single-user mode, and want to continue freezing and keep things\nunder control.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Dec 2021 13:36:20 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "I wrote:\n\n> Instead of a boolean, it seems like the new option should specify some\n> age below which VACUUM will skip the table entirely, and above which\n> will enter fail-safe mode. As mentioned earlier, the shutdown hint\n> could spell out the exact command. With this design, it would specify\n> the fail-safe default, or something else, to use with the option.\n\nOn second thought, we don't really need another number here. We could\nsimply go by the existing failsafe parameter, and if the admin wants a\ndifferent value, it's already possible to specify\nvacuum_failsafe_age/vacuum_multixact_failsafe_age in a session,\nincluding in single-user mode. Perhaps a new boolean called\nFAILSAFE_ONLY. If no table is specified, then when generating the list\nof tables, include only those with relfrozenxid/relminmxid greater\nthan their failsafe thresholds.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Dec 2021 17:35:05 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" 
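[Editor's illustration] The table selection that the FAILSAFE_ONLY proposal above describes — vacuum only relations already past their failsafe thresholds — can be previewed today with an ordinary catalog query against the existing PostgreSQL 14 failsafe GUCs. This is only a sketch of the proposed filter, not anything shipped:

```sql
-- Illustrative only: the tables a hypothetical FAILSAFE_ONLY vacuum
-- would pick, i.e. those already past a failsafe age threshold.
SELECT c.oid::regclass        AS table_name,
       age(c.relfrozenxid)    AS xid_age,
       mxid_age(c.relminmxid) AS mxid_age
FROM pg_class AS c
WHERE c.relkind IN ('r', 'm', 't')
  AND (age(c.relfrozenxid) > current_setting('vacuum_failsafe_age')::int
       OR mxid_age(c.relminmxid) > current_setting('vacuum_multixact_failsafe_age')::int)
ORDER BY age(c.relfrozenxid) DESC;
```

Note the OR: a table qualifies if either its XID age or its MultiXactId age is past the corresponding threshold, mirroring how the failsafe itself can trigger on either limit.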
}, { "msg_contents": "On Tue, Dec 21, 2021 at 1:31 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> On second thought, we don't really need another number here. We could\n> simply go by the existing failsafe parameter, and if the admin wants a\n> different value, it's already possible to specify\n> vacuum_failsafe_age/vacuum_multixact_failsafe_age in a session,\n> including in single-user mode. Perhaps a new boolean called\n> FAILSAFE_ONLY. If no table is specified, then when generating the list\n> of tables, include only those with relfrozenxid/relminmxid greater\n> than their failsafe thresholds.\n\nThat's equivalent to the quick and dirty patch I wrote (assuming that\nthe user actually uses this new FAILSAFE_ONLY thing).\n\nBut if we're going to add a new option to the VACUUM command (or\nsomething of similar scope), then we might as well add a new behavior\nthat is reasonably exact -- something that (say) only *starts* a\nVACUUM for those tables whose relfrozenxid age currently exceeds half\nthe autovacuum_freeze_max_age for the table (usually taken from the\nGUC, sometimes taken from the reloption), which also forces the\nfailsafe. And with similar handling for\nrelminmxid/autovacuum_multixact_freeze_max_age.\n\nIn other words, while triggering the failsafe is important, simply *not\nstarting* VACUUM for relations where there is really no need for it is\nat least as important. We shouldn't even think about pruning or\nfreezing with these tables. (ISTM that the only thing that might be a\nbit controversial about any of this is my definition of \"safe\", which\nseems like about the right trade-off to me.)\n\nThis new command/facility should probably not be a new flag to the\nVACUUM command, as such. Rather, I think that it should either be an\nSQL-callable function, or a dedicated top-level command (that doesn't\naccept any tables). 
The only reason to have this is for scenarios\nwhere the user is already in a tough spot with wraparound failure,\nlike that client of yours. Nobody wants to force the failsafe for one\nspecific table. It's not general purpose, at all, and shouldn't claim\nto be.\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 21 Dec 2021 13:56:30 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Tue, Dec 21, 2021 at 5:56 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Dec 21, 2021 at 1:31 PM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n> > On second thought, we don't really need another number here. We could\n> > simply go by the existing failsafe parameter, and if the admin wants a\n> > different value, it's already possible to specify\n> > vacuum_failsafe_age/vacuum_multixact_failsafe_age in a session,\n> > including in single-user mode. Perhaps a new boolean called\n> > FAILSAFE_ONLY. If no table is specified, then when generating the list\n> > of tables, include only those with relfrozenxid/relminmxid greater\n> > than their failsafe thresholds.\n>\n> That's equivalent to the quick and dirty patch I wrote (assuming that\n> the user actually uses this new FAILSAFE_ONLY thing).\n\nEquivalent but optional.\n\n> But if we're going to add a new option to the VACUUM command (or\n> something of similar scope), then we might as well add a new behavior\n> that is reasonably exact -- something that (say) only *starts* a\n> VACUUM for those tables whose relfrozenxid age currently exceeds half\n> the autovacuum_freeze_max_age for the table (usually taken from the\n> GUC, sometimes taken from the reloption), which also forces the\n> failsafe. 
And with similar handling for\n> relminmxid/autovacuum_multixact_freeze_max_age.\n>\n> In other words, while triggering the failsafe is important, simply *not\n> starting* VACUUM for relations where there is really no need for it is\n> at least as important. We shouldn't even think about pruning or\n> freezing with these tables.\n\nRight, not starting where not necessary is crucial for getting out of\nsingle-user mode as quickly as possible. We're in agreement there.\n\n> (ISTM that the only thing that might be a\n> bit controversial about any of this is my definition of \"safe\", which\n> seems like about the right trade-off to me.)\n\nIt seems reasonable to want to start back up and not immediately have\nanti-wraparound vacuums kick in. On the other hand, it's not good to\ndo work while unable to monitor progress, and while more WAL is piling\nup. I'm not sure where the right trade off is.\n\n> This new command/facility should probably not be a new flag to the\n> VACUUM command, as such. Rather, I think that it should either be an\n> SQL-callable function, or a dedicated top-level command (that doesn't\n> accept any tables). The only reason to have this is for scenarios\n> where the user is already in a tough spot with wraparound failure,\n> like that client of yours. Nobody wants to force the failsafe for one\n> specific table. It's not general purpose, at all, and shouldn't claim\n> to be.\n\nMakes sense, I'll have a think about what that would look like.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Dec 2021 19:02:12 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Dec 22, 2021 at 6:56 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Dec 21, 2021 at 1:31 PM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n> > On second thought, we don't really need another number here. 
We could\n> > simply go by the existing failsafe parameter, and if the admin wants a\n> > different value, it's already possible to specify\n> > vacuum_failsafe_age/vacuum_multixact_failsafe_age in a session,\n> > including in single-user mode. Perhaps a new boolean called\n> > FAILSAFE_ONLY. If no table is specified, then when generating the list\n> > of tables, include only those with relfrozenxid/relminmxid greater\n> > than their failsafe thresholds.\n>\n> That's equivalent to the quick and dirty patch I wrote (assuming that\n> the user actually uses this new FAILSAFE_ONLY thing).\n>\n> But if we're going to add a new option to the VACUUM command (or\n> something of similar scope), then we might as well add a new behavior\n> that is reasonably exact -- something that (say) only *starts* a\n> VACUUM for those tables whose relfrozenxid age currently exceeds half\n> the autovacuum_freeze_max_age for the table (usually taken from the\n> GUC, sometimes taken from the reloption), which also forces the\n> failsafe. And with similar handling for\n> relminmxid/autovacuum_multixact_freeze_max_age.\n>\n> In other words, while triggering the failsafe is important, simply *not\n> starting* VACUUM for relations where there is really no need for it is\n> at least as important. We shouldn't even think about pruning or\n> freezing with these tables. (ISTM that the only thing that might be a\n> bit controversial about any of this is my definition of \"safe\", which\n> seems like about the right trade-off to me.)\n\n+1\n\n>\n> This new command/facility should probably not be a new flag to the\n> VACUUM command, as such. Rather, I think that it should either be an\n> SQL-callable function, or a dedicated top-level command (that doesn't\n> accept any tables). The only reason to have this is for scenarios\n> where the user is already in a tough spot with wraparound failure,\n> like that client of yours. Nobody wants to force the failsafe for one\n> specific table. 
It's not general purpose, at all, and shouldn't claim\n> to be.\n\nEven not in the situation where the database has to run as the\nsingle-user mode to freeze tuples, I think there would be some use\ncases where users want to run vacuum (in failsafe mode) on tables with\nrelfrozenxid/relminmxid greater than their failsafe thresholds before\nfalling into that situation. I think it’s common that users are\nsomehow monitoring relfrozenxid/relminmxid and want to manually run\nvacuum on them rather than relying on autovacuums. --min-xid-age\noption and --min-mxid-age option of vacuumdb command would be good\nexamples. So I think this new command/facility might not necessarily\nneed to be specific to single-user mode.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 22 Dec 2021 11:39:00 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Tue, Dec 21, 2021 at 10:39 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Dec 22, 2021 at 6:56 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > This new command/facility should probably not be a new flag to the\n> > VACUUM command, as such. Rather, I think that it should either be an\n> > SQL-callable function, or a dedicated top-level command (that doesn't\n> > accept any tables). The only reason to have this is for scenarios\n> > where the user is already in a tough spot with wraparound failure,\n> > like that client of yours. Nobody wants to force the failsafe for one\n> > specific table. 
It's not general purpose, at all, and shouldn't claim\n> > to be.\n>\n> Even not in the situation where the database has to run as the\n> single-user mode to freeze tuples, I think there would be some use\n> cases where users want to run vacuum (in failsafe mode) on tables with\n> relfrozenxid/relminmxid greater than their failsafe thresholds before\n> falling into that situation. I think it’s common that users are\n> somehow monitoring relfrozenxid/relminmxid and want to manually run\n> vacuum on them rather than relying on autovacuums. --min-xid-age\n> option and --min-mxid-age option of vacuumdb command would be good\n> examples. So I think this new command/facility might not necessarily\n> need to be specific to single-user mode.\n\nIf we want to leave open the possibility to specify these parameters,\na SQL-callable function seems like the way to go. And even if we\ndon't, a function is fine.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 22 Dec 2021 11:07:49 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Tue, Dec 21, 2021 at 6:39 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> Even not in the situation where the database has to run as the\n> single-user mode to freeze tuples, I think there would be some use\n> cases where users want to run vacuum (in failsafe mode) on tables with\n> relfrozenxid/relminmxid greater than their failsafe thresholds before\n> falling into that situation. I think it’s common that users are\n> somehow monitoring relfrozenxid/relminmxid and want to manually run\n> vacuum on them rather than relying on autovacuums. --min-xid-age\n> option and --min-mxid-age option of vacuumdb command would be good\n> examples. 
So I think this new command/facility might not necessarily\n> need to be specific to single-user mode.\n\nIt wouldn't be specific to single-user mode, since that is not really\nspecial. It's only special in a way that's quite artificial (it can\ncontinue to allocate XIDs past the point where we usually deem it\nunsafe).\n\nSo, I think we agree; this new emergency vacuuming feature shouldn't\nbe restricted to single-user mode in any way, and shouldn't care about\nwhether we're in single user mode or not when run. OTOH, it probably\nwill be presented as something that is typically run in single user\nmode, in situations like the one John's customer found themselves in\n-- disastrous, unpleasant situations. It's not just a good policy\n(that makes testing easy). The same kind of problem can easily be\ncaught a little earlier, before the system actually becomes unable to\nallocate new XIDs (when not in single-user mode) -- that's quite\nlikely, and almost as scary.\n\nAs I said before, ISTM that the important thing is to have something\ndead simple -- something that is easy to use when woken at 4am, when\nthe DBA is tired and stressed. Something that makes generic choices,\nthat are not way too conservative, but also don't risk making the\nproblem worse instead of better.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 22 Dec 2021 13:33:28 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" 
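[Editor's illustration] For reference, the building blocks for the "dead simple" emergency behavior discussed here already exist as per-table options in PostgreSQL 14, so a minimal manual recipe for one at-risk table looks roughly like this (the table name is a placeholder):

```sql
-- Freeze aggressively while skipping the non-essential work that the
-- failsafe also skips (index vacuuming and relation truncation).
VACUUM (FREEZE, INDEX_CLEANUP OFF, TRUNCATE OFF, VERBOSE) some_bloated_table;
```

What the thread is after is a single command that applies this treatment automatically to exactly the tables that need it, without the operator having to enumerate them at 4am.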
}, { "msg_contents": "On Tue, Dec 21, 2021 at 4:56 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> But if we're going to add a new option to the VACUUM command (or\n> something of similar scope), then we might as well add a new behavior\n> that is reasonably exact -- something that (say) only *starts* a\n> VACUUM for those tables whose relfrozenxid age currently exceeds half\n> the autovacuum_freeze_max_age for the table (usually taken from the\n> GUC, sometimes taken from the reloption), which also forces the\n> failsafe. And with similar handling for\n> relminmxid/autovacuum_multixact_freeze_max_age.\n\n> This new command/facility should probably not be a new flag to the\n> VACUUM command, as such. Rather, I think that it should either be an\n> SQL-callable function, or a dedicated top-level command (that doesn't\n> accept any tables). The only reason to have this is for scenarios\n> where the user is already in a tough spot with wraparound failure,\n> like that client of yours. Nobody wants to force the failsafe for one\n> specific table. It's not general purpose, at all, and shouldn't claim\n> to be.\n\nI've attached a PoC *untested* patch to show what it would look like\nas a top-level statement. If the \"shape\" is uncontroversial, I'll put\nwork into testing it and fleshing it out.\n\nFor the PoC I wanted to try re-using existing keywords. I went with\n\"VACUUM LIMIT\" since LIMIT is already a keyword that cannot be used as\na table name. It also brings \"wraparound limit\" to mind. We could add\na single-use unreserved keyword (such as VACUUM_MINIMAL or\nVACUUM_FAST), but that doesn't seem great.\n\n> In other words, while triggering the failsafe is important, simply *not\n> starting* VACUUM for relations where there is really no need for it is\n> at least as important. We shouldn't even think about pruning or\n> freezing with these tables. 
(ISTM that the only thing that might be a\n> bit controversial about any of this is my definition of \"safe\", which\n> seems like about the right trade-off to me.)\n\nI'm not sure what the right trade-off is, but as written I used 95% of\nmax age. It might be undesirable to end up so close to kicking off\nuninterruptible vacuums, but the point is to get out of single-user\nmode and back to streaming WAL as quickly as possible. It might also\nbe worth overriding the min ages as well, but haven't done so here.\n\nIt can be executed in normal mode (although it's not expected to be),\nwhich makes testing easier and allows for a future possibility of not\nrequiring shutdown at all, by e.g. terminating non-superuser\nconnections.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 11 Jan 2022 19:58:56 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Tue, Jan 11, 2022 at 4:59 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> I've attached a PoC *untested* patch to show what it would look like\n> as a top-level statement. If the \"shape\" is uncontroversial, I'll put\n> work into testing it and fleshing it out.\n\nGreat!\n\n> For the PoC I wanted to try re-using existing keywords. I went with\n> \"VACUUM LIMIT\" since LIMIT is already a keyword that cannot be used as\n> a table name. It also brings \"wraparound limit\" to mind. We could add\n> a single-use unreserved keyword (such as VACUUM_MINIMAL or\n> VACUUM_FAST), but that doesn't seem great.\n\nThis seems reasonable, but you could add a new option instead, without\nmuch downside. While INDEX_CLEANUP kind of looks like a keyword, it\nisn't really a keyword. (Perhaps you knew this already.)\n\nMaking this a new option is a little awkward, admittedly. 
It's not\nclear what it means to \"VACUUM (LIMIT) my_table\" -- do you just throw\nan error for stuff like that? So perhaps your approach of adding\nVacuumMinimalStmt (a minimal variant of the VACUUM command) is better.\n\n> I'm not sure what the right trade-off is, but as written I used 95% of\n> max age. It might be undesirable to end up so close to kicking off\n> uninterruptible vacuums, but the point is to get out of single-user\n> mode and back to streaming WAL as quickly as possible. It might also\n> be worth overriding the min ages as well, but haven't done so here.\n\nI wonder if we should keep autovacuum_freeze_max_age out of it -- its\ndefault is too conservative in general. I'm concerned that applying\nthis autovacuum_freeze_max_age test during VACUUM LIMIT doesn't go far\nenough -- it may require VACUUM LIMIT to do significantly more work\nthan is needed to get the system back online (while leaving a sensible\namount of headroom). Also seems like it might be a good idea to avoid\nrelying on the user configuration, given that VACUUM LIMIT is only run\nwhen everything is already in disarray. (Besides, it's not clear that\nit's okay to use the autovacuum_freeze_max_age GUC without also using\nthe reloption of the same name.)\n\nWhat do you think of applying a similar test using a generic 1 billion\nXID (and 1 billion MXID) age cutoff? When VACUUM LIMIT is run, we've\nalready learned that the *entire* XID space wasn't sufficient for the\nuser workload, so we're not really in a position to promise much.\nOften the real problem will be something like a leaked replication\nslot, or application code that's seriously misbehaving. It's really\nthe DBA's job to *keep* the system up. VACUUM LIMIT is just a tool\nthat allows the DBA to do this without excessive downtime.\n\nThe GetNewTransactionId() WARNINGs ought to be changed to reference\nVACUUM LIMIT. 
(You probably just didn't get around to that in this\nPOC, but couldn't hurt to remind you.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 11 Jan 2022 17:56:35 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Tue, Jan 11, 2022 at 07:58:56PM -0500, John Naylor wrote:\n> +\t\t// FIXME: also check reloption\n> +\t\t// WIP: 95% is a starting point for discussion\n> +\t\tif ((table_xid_age < autovacuum_freeze_max_age * 0.95) ||\n> +\t\t\t(table_mxid_age < autovacuum_multixact_freeze_max_age * 0.95))\n> +\t\t\tcontinue;\n\nShould be &&\n\nShould this emergency vacuum \"order by age() DESC\" ?\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 11 Jan 2022 20:20:27 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Jan 12, 2022 at 10:57 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Jan 11, 2022 at 4:59 PM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n> > I've attached a PoC *untested* patch to show what it would look like\n> > as a top-level statement. If the \"shape\" is uncontroversial, I'll put\n> > work into testing it and fleshing it out.\n>\n> Great!\n\n+1\n\n>\n> > For the PoC I wanted to try re-using existing keywords. I went with\n> > \"VACUUM LIMIT\" since LIMIT is already a keyword that cannot be used as\n> > a table name. It also brings \"wraparound limit\" to mind. We could add\n> > a single-use unreserved keyword (such as VACUUM_MINIMAL or\n> > VACUUM_FAST), but that doesn't seem great.\n>\n> This seems reasonable, but you could add a new option instead, without\n> much downside. While INDEX_CLEANUP kind of looks like a keyword, it\n> isn't really a keyword. (Perhaps you knew this already.)\n>\n> Making this a new option is a little awkward, admittedly. 
It's not\n> clear what it means to \"VACUUM (LIMIT) my_table\" -- do you just throw\n> an error for stuff like that? So perhaps your approach of adding\n> VacuumMinimalStmt (a minimal variant of the VACUUM command) is better.\n\nIt seems to me that adding new syntax instead of a new option is less\nflexible. In the future, for instance, when we support parallel heap\nscan for VACUUM, we may want to add a parallel-related option to both\nVACUUM statement and VACUUM LIMIT statement. VACUUM LIMIT statement\nwould end up becoming like VACUUM statement?\n\nAs another idea, we might be able to add a new option that takes an\noptional integer value, like VACUUM (MIN_XID), VACUUM (MIN_MXID), and\nVACUUM (MIN_XID 500000). We vacuum only tables whose age is older than\nthe given value. If the value is omitted, we vacuum only tables whose\nage exceeds a threshold (say autovacuum_freeze_max_age * 0.95), which\ncan be used in an emergency case and output in GetNewTransactionID()\nWARNINGs output. vacuumdb’s --min-xid-age and --min-mxid-age can use\nthis option instead of fetching the list of tables from the server.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 12 Jan 2022 15:48:55 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Tue, Jan 11, 2022 at 8:57 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Tue, Jan 11, 2022 at 4:59 PM John Naylor\n> > For the PoC I wanted to try re-using existing keywords. I went with\n> > \"VACUUM LIMIT\" since LIMIT is already a keyword that cannot be used as\n> > a table name. It also brings \"wraparound limit\" to mind. We could add\n> > a single-use unreserved keyword (such as VACUUM_MINIMAL or\n> > VACUUM_FAST), but that doesn't seem great.\n>\n> This seems reasonable, but you could add a new option instead, without\n> much downside. 
While INDEX_CLEANUP kind of looks like a keyword, it\n> isn't really a keyword. (Perhaps you knew this already.)\n>\n> Making this a new option is a little awkward, admittedly. It's not\n> clear what it means to \"VACUUM (LIMIT) my_table\" -- do you just throw\n> an error for stuff like that? So perhaps your approach of adding\n> VacuumMinimalStmt (a minimal variant of the VACUUM command) is better.\n\nWe'd also have to do some checks to either ignore other options or\nthrow an error, which seems undesirable for code maintenance. For that\nreason, I prefer the separate top-level statement, but I'm open to\nbike-shedding on the actual syntax. I also briefly looked into a SQL\nfunction, but the transaction management would make that more\ndifficult.\n\n> > I'm not sure what the right trade-off is, but as written I used 95% of\n> > max age. It might be undesirable to end up so close to kicking off\n> > uninterruptible vacuums, but the point is to get out of single-user\n> > mode and back to streaming WAL as quickly as possible. It might also\n> > be worth overriding the min ages as well, but haven't done so here.\n>\n> I wonder if we should keep autovacuum_freeze_max_age out of it -- its\n> default is too conservative in general. I'm concerned that applying\n> this autovacuum_freeze_max_age test during VACUUM LIMIT doesn't go far\n> enough -- it may require VACUUM LIMIT to do significantly more work\n> than is needed to get the system back online (while leaving a sensible\n> amount of headroom). Also seems like it might be a good idea to avoid\n> relying on the user configuration, given that VACUUM LIMIT is only run\n> when everything is already in disarray. (Besides, it's not clear that\n> it's okay to use the autovacuum_freeze_max_age GUC without also using\n> the reloption of the same name.)\n>\n> What do you think of applying a similar test using a generic 1 billion\n> XID (and 1 billion MXID) age cutoff?\n\nI like that a lot, actually. 
It's simple and insulates us from\nwondering about corner cases in configuration.\n\n> The GetNewTransactionId() WARNINGs ought to be changed to reference\n> VACUUM LIMIT. (You probably just didn't get around to that in this\n> POC, but couldn't hurt to remind you.)\n\nI'll do that as well as documentation after we have agreement (or at\nleast lack of objection) on the syntax.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 Jan 2022 10:07:47 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Tue, Jan 11, 2022 at 9:20 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Jan 11, 2022 at 07:58:56PM -0500, John Naylor wrote:\n> > + // FIXME: also check reloption\n> > + // WIP: 95% is a starting point for discussion\n> > + if ((table_xid_age < autovacuum_freeze_max_age * 0.95) ||\n> > + (table_mxid_age < autovacuum_multixact_freeze_max_age * 0.95))\n> > + continue;\n>\n> Should be &&\n\nThanks! Will fix.\n\n> Should this emergency vacuum \"order by age() DESC\" ?\n\nThat would add complexity and only save a marginal amount of time.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 Jan 2022 10:09:49 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Jan 12, 2022 at 1:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> It seems to me that adding new syntax instead of a new option is less\n> flexible. In the future, for instance, when we support parallel heap\n> scan for VACUUM, we may want to add a parallel-related option to both\n> VACUUM statement and VACUUM LIMIT statement. 
VACUUM LIMIT statement\n> would end up becoming like VACUUM statement?\n\nThis is intended for single-user mode, so parallelism is not relevant.\n\n> As another idea, we might be able to add a new option that takes an\n> optional integer value, like VACUUM (MIN_XID), VACUUM (MIN_MXID), and\n> VACUUM (MIN_XID 500000). We vacuum only tables whose age is older than\n> the given value. If the value is omitted, we vacuum only tables whose\n> age exceeds a threshold (say autovacuum_freeze_max_age * 0.95), which\n> can be used in an emergency case and output in GetNewTransactionID()\n> WARNINGs output. vacuumdb’s --min-xid-age and --min-mxid-age can use\n> this option instead of fetching the list of tables from the server.\n\nThat could work, and maybe also have general purpose, but I see two\nproblems if I understand you correctly:\n\n- If we have a default threshold when the values are omitted, that\nimplies we need to special-case single-user mode with non-obvious\nbehavior, which is not ideal, as Andres mentioned upthread. (Or, now\nmanual VACUUM by default would not do anything, except in extreme\ncases, which is worse.)\n\n- In the single-user case, the admin would still need to add\nINDEX_CLEANUP = off for minimum downtime, and it should be really\nsimple.\n\n- For the general case, we would now have the ability to vacuum a\ntable, and possibly have no effect at all. That seems out of place\nwith the other options.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 Jan 2022 10:42:06 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" 
}, { "msg_contents": "On 1/12/22, 7:43 AM, \"John Naylor\" <john.naylor@enterprisedb.com> wrote:\r\n> On Wed, Jan 12, 2022 at 1:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n>> As another idea, we might be able to add a new option that takes an\r\n>> optional integer value, like VACUUM (MIN_XID), VACUUM (MIN_MXID), and\r\n>> VACUUM (MIN_XID 500000). We vacuum only tables whose age is older than\r\n>> the given value. If the value is omitted, we vacuum only tables whose\r\n>> age exceeds a threshold (say autovacuum_freeze_max_age * 0.95), which\r\n>> can be used in an emergency case and output in GetNewTransactionID()\r\n>> WARNINGs output. vacuumdb’s --min-xid-age and --min-mxid-age can use\r\n>> this option instead of fetching the list of tables from the server.\r\n>\r\n> That could work, and maybe also have general purpose, but I see two\r\n> problems if I understand you correctly:\r\n>\r\n> - If we have a default threshold when the values are omitted, that\r\n> implies we need to special-case single-user mode with non-obvious\r\n> behavior, which is not ideal, as Andres mentioned upthread. (Or, now\r\n> manual VACUUM by default would not do anything, except in extreme\r\n> cases, which is worse.)\r\n\r\nI agree, I don't think such options should have a default value.\r\n\r\n> - In the single-user case, the admin would still need to add\r\n> INDEX_CLEANUP = off for minimum downtime, and it should be really\r\n> simple.\r\n>\r\n> - For the general case, we would now have the ability to vacuum a\r\n> table, and possibly have no effect at all. That seems out of place\r\n> with the other options.\r\n\r\nPerhaps a message would be emitted when tables are specified but\r\nskipped due to the min-xid-age option.\r\n\r\nAs I've stated upthread, Sawada-san's suggested approach was my\r\ninitial reaction to this thread. I'm not wedded to the idea of adding\r\nnew options, but I think there are a couple of advantages. 
For both\r\nsingle-user mode and normal operation (which may be in imminent\r\nwraparound danger), you could use the same command:\r\n\r\n VACUUM (MIN_XID_AGE 1600000000, ...);\r\n\r\n(As an aside, we'd need to figure out how XID and MXID options would\r\nwork together. Presumably most users would want to OR them.)\r\n\r\nThis doesn't really tie in super nicely with the failsafe mechanism,\r\nbut adding something like a FAILSAFE option doesn't seem right to me,\r\nas it's basically just an alias for a bunch of other options. In my\r\nmind, even a new top-level command would just be an alias for the\r\naforementioned command. Of course, providing a new option is not\r\nquite as simple as opening up single-user mode and typing \"BAIL OUT,\"\r\nbut I don't know if it is prohibitively complicated for end users.\r\nThey'll already have had to figure out how to start single-user mode\r\nin the first place, and we can have nice ERROR/WARNING messages that\r\nprovide a suggested VACUUM command.\r\n\r\nThe other advantage I see with age-related options is that it can be\r\nuseful for non-imminent-wraparound situations as well. For example,\r\nmaybe a user just wants to manually vacuum everything (including\r\nindexes) with an age above 500M on the weekends.\r\n\r\nAnother idea is to do both. We could add age-related options, and we\r\ncould also add a \"BAIL OUT\" command that is just an alias for a\r\nspecial VACUUM command that we feel will help get things under control\r\nas quickly as possible.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 12 Jan 2022 17:25:55 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Jan 12, 2022 at 12:26 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> > - For the general case, we would now have the ability to vacuum a\n> > table, and possibly have no effect at all. 
That seems out of place\n> > with the other options.\n>\n> Perhaps a message would be emitted when tables are specified but\n> skipped due to the min-xid-age option.\n>\n> As I've stated upthread, Sawada-san's suggested approach was my\n> initial reaction to this thread. I'm not wedded to the idea of adding\n> new options, but I think there are a couple of advantages. For both\n> single-user mode and normal operation (which may be in imminent\n> wraparound danger), you could use the same command:\n>\n> VACUUM (MIN_XID_AGE 1600000000, ...);\n\nMy proposed top-level statement can also be used in normal operation,\nso the only possible advantage is configurability. But I don't really\nsee any advantage in that -- I don't think we should be moving in the\ndirection of adding more-intricate ways to paper over the deficiencies\nin autovacuum scheduling. (It could be argued that I'm doing exactly\nthat in this whole thread, but [imminent] shutdown situations have\nother causes besides deficient scheduling.)\n\n> (As an aside, we'd need to figure out how XID and MXID options would\n> work together. Presumably most users would want to OR them.)\n>\n> This doesn't really tie in super nicely with the failsafe mechanism,\n> but adding something like a FAILSAFE option doesn't seem right to me,\n\nI agree -- it would be awkward and messy as an option. However, I see\nthe same problem with xid/mxid -- I would actually argue they are not\neven proper options; they are \"selectors\". Your comments above about\n1) needing to OR them and 2) emitting a message when a VACUUM command\ndoesn't actually do anything are evidence of that fact.\n\n> The other advantage I see with age-related options is that it can be\n> useful for non-imminent-wraparound situations as well. 
For example,\n> maybe a user just wants to manually vacuum everything (including\n> indexes) with an age above 500M on the weekends.\n\nThere is already vaccumdb for that, and I think it's method of\nselecting tables is sound -- I'm not convinced that pushing table\nselection to the server command as \"options\" is an improvement.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jan 2022 07:57:44 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On 1/13/22, 4:58 AM, \"John Naylor\" <john.naylor@enterprisedb.com> wrote:\r\n> On Wed, Jan 12, 2022 at 12:26 PM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> As I've stated upthread, Sawada-san's suggested approach was my\r\n>> initial reaction to this thread. I'm not wedded to the idea of adding\r\n>> new options, but I think there are a couple of advantages. For both\r\n>> single-user mode and normal operation (which may be in imminent\r\n>> wraparound danger), you could use the same command:\r\n>>\r\n>> VACUUM (MIN_XID_AGE 1600000000, ...);\r\n>\r\n> My proposed top-level statement can also be used in normal operation,\r\n> so the only possible advantage is configurability. But I don't really\r\n> see any advantage in that -- I don't think we should be moving in the\r\n> direction of adding more-intricate ways to paper over the deficiencies\r\n> in autovacuum scheduling. (It could be argued that I'm doing exactly\r\n> that in this whole thread, but [imminent] shutdown situations have\r\n> other causes besides deficient scheduling.)\r\n\r\nThe new top-level command would be configurable, right? Your patch\r\nuses autovacuum_freeze_max_age/autovacuum_multixact_freeze_max_age, so\r\nthe behavior of this new command now depends on the values of\r\nparameters that won't obviously be related to it. 
If these parameters\r\nare set very low (e.g., the default values), then this command will\r\nend up doing far more work than is probably necessary.\r\n\r\nIf we did go the route of using a parameter to determine which tables\r\nto vacuum, I think vacuum_failsafe_age is a much better candidate, as\r\nit defaults to a much higher value that is more likely to prevent\r\ndoing extra work.  That being said, I don't know if overloading\r\nparameters is the right way to go.\r\n\r\n>> (As an aside, we'd need to figure out how XID and MXID options would\r\n>> work together.  Presumably most users would want to OR them.)\r\n>>\r\n>> This doesn't really tie in super nicely with the failsafe mechanism,\r\n>> but adding something like a FAILSAFE option doesn't seem right to me,\r\n>\r\n> I agree -- it would be awkward and messy as an option. However, I see\r\n> the same problem with xid/mxid -- I would actually argue they are not\r\n> even proper options; they are \"selectors\".
For example,\r\n>> maybe a user just wants to manually vacuum everything (including\r\n>> indexes) with an age above 500M on the weekends.\r\n>\r\n> There is already vaccumdb for that, and I think it's method of\r\n> selecting tables is sound -- I'm not convinced that pushing table\r\n> selection to the server command as \"options\" is an improvement.\r\n\r\nI guess I'm ultimately imagining the new options as replacing the\r\nvacuumdb implementation. IOW vacuumdb would just use MIN_(M)XID_AGE\r\nbehind the scenes (as would a new top-level command).\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 13 Jan 2022 22:04:11 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "I see a CF entry has been created already, and the cfbot doesn't like\nmy PoC. To prevent confusion, I've taken the liberty of switching the\nauthor to myself and set to Waiting on Author. FWIW, my local build\npassed make check-world after applying Justin's fix and changing a\ncouple other things.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 14 Jan 2022 11:21:54 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Fri, Jan 14, 2022 at 7:04 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 1/13/22, 4:58 AM, \"John Naylor\" <john.naylor@enterprisedb.com> wrote:\n> > On Wed, Jan 12, 2022 at 12:26 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> >> As I've stated upthread, Sawada-san's suggested approach was my\n> >> initial reaction to this thread. I'm not wedded to the idea of adding\n> >> new options, but I think there are a couple of advantages. 
For both\n> >> single-user mode and normal operation (which may be in imminent\n> >> wraparound danger), you could use the same command:\n> >>\n> >> VACUUM (MIN_XID_AGE 1600000000, ...);\n> >\n> > My proposed top-level statement can also be used in normal operation,\n> > so the only possible advantage is configurability. But I don't really\n> > see any advantage in that -- I don't think we should be moving in the\n> > direction of adding more-intricate ways to paper over the deficiencies\n> > in autovacuum scheduling. (It could be argued that I'm doing exactly\n> > that in this whole thread, but [imminent] shutdown situations have\n> > other causes besides deficient scheduling.)\n>\n> The new top-level command would be configurable, right? Your patch\n> uses autovacuum_freeze_max_age/autovacuum_multixact_freeze_max_age, so\n> the behavior of this new command now depends on the values of\n> parameters that won't obviously be related to it. If these parameters\n> are set very low (e.g., the default values), then this command will\n> end up doing far more work than is probably necessary.\n>\n> If we did go the route of using a parameter to determine which tables\n> to vacuum, I think vacuum_failsafe_age is a much better candidate, as\n> it defaults to a much higher value that is more likely to prevent\n> doing extra work. That being said, I don't know if overloading\n> parameters is the right way to go.\n>\n> >> (As an aside, we'd need to figure out how XID and MXID options would\n> >> work together. Presumably most users would want to OR them.)\n> >>\n> >> This doesn't really tie in super nicely with the failsafe mechanism,\n> >> but adding something like a FAILSAFE option doesn't seem right to me,\n> >\n> > I agree -- it would be awkward and messy as an option. However, I see\n> > the same problem with xid/mxid -- I would actually argue they are not\n> > even proper options; they are \"selectors\". 
Your comments above about\n> > 1) needing to OR them and 2) emitting a message when a VACUUM command\n> > doesn't actually do anything are evidence of that fact.\n>\n> That's a fair point. But I don't think these problems are totally\n> intractable. We already emit \"skipping\" messages from VACUUM\n> sometimes, and interactions between VACUUM options exist today, too.\n> For example, FREEZE is redundant when FULL is specified, and\n> INDEX_CLEANUP is totally ignored when FULL is used.\n>\n> >> The other advantage I see with age-related options is that it can be\n> >> useful for non-imminent-wraparound situations as well. For example,\n> >> maybe a user just wants to manually vacuum everything (including\n> >> indexes) with an age above 500M on the weekends.\n\nI also think there is a use case where a user just wants to manually\nvacuum tables that are older than a certain threshold. In this case,\nthey might want to specify VACUUM command options such as the parallel\noption while selecting tables.\n\n> >\n> > There is already vaccumdb for that, and I think it's method of\n> > selecting tables is sound -- I'm not convinced that pushing table\n> > selection to the server command as \"options\" is an improvement.\n\nI think that having the user not rely on vacuumdb by implementing it\non the server side would be an improvement.\n\n> I guess I'm ultimately imagining the new options as replacing the\n> vacuumdb implementation. 
IOW vacuumdb would just use MIN_(M)XID_AGE\n> behind the scenes (as would a new top-level command).\n\nI had the same idea.\n\nThat having been said, I agree that xid/mxid options are different\nthings from the existing VACUUM command options; whereas the existing\nVACUUM options control its behavior, xid/mxid options are selectors\nfor tables to vacuum (PROCESS_TOAST option could be a selector but I\nthink it’s slightly different from xid/mxid options).\n\nIIUC what we want to do here are two things: (1) select only old\ntables and (2) set INDEX_CLEANUP = off, TRUNCATE = off, and FREEZE =\non. VACUUM LIMIT statement does both things at the same time. Although\nI’m concerned a bit about its flexibility, it’s a reasonable solution.\n\nOn the other hand, it’s probably also useful to do either one thing in\nsome cases. For instance, having a selector for (1) would be useful,\nand having a new option like FAST_FREEZE for (2) would also be useful.\nGiven there is already a way for (2) (it does not default though), I\nthink it might also be a good start inventing something for (1). For\ninstance, a selector for VACUUM statement I came up with is:\n\nVACUUM (verbose on) TABLES WITH (min_xid_age = 1600000000);\nor\nVACUUM (verbose on) TABLES WITH (min_age = failsafe_limit);\n\nWe can expand it in the future to select tables by, for example, dead\ntuple ratio, size, etc.\n\nIt's a random thought but maybe worth considering.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 19 Jan 2022 14:46:01 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" 
}, { "msg_contents": "On Wed, Jan 19, 2022 at 12:46 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Jan 14, 2022 at 7:04 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n> >\n> > I guess I'm ultimately imagining the new options as replacing the\n> > vacuumdb implementation. IOW vacuumdb would just use MIN_(M)XID_AGE\n> > behind the scenes (as would a new top-level command).\n>\n> I had the same idea.\n\nThis seems to be the motivating reason for wanting new configurability\non the server side. In any case, new knobs are out of scope for this\nthread. If the use case is compelling enough, may I suggest starting a\nnew thread?\n\nRegarding the thread subject, I've been playing with the grammar, and\nfound it's quite easy to have\n\nVACUUM FOR WRAPAROUND\nor\nVACUUM FOR EMERGENCY\n\nsince FOR is a reserved word (and following that can be an IDENT plus\na strcmp check) and cannot conflict with table names. This sounds a\nbit more natural than VACUUM LIMIT. Opinions?\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Jan 2022 14:14:13 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On 1/18/22, 9:47 PM, \"Masahiko Sawada\" <sawada.mshk@gmail.com> wrote:\r\n> IIUC what we want to do here are two things: (1) select only old\r\n> tables and (2) set INDEX_CLEANUP = off, TRUNCATE = off, and FREEZE =\r\n> on. VACUUM LIMIT statement does both things at the same time. Although\r\n> I’m concerned a bit about its flexibility, it’s a reasonable solution.\r\n>\r\n> On the other hand, it’s probably also useful to do either one thing in\r\n> some cases. 
For instance, having a selector for (1) would be useful,\r\n> and having a new option like FAST_FREEZE for (2) would also be useful.\r\n> Given there is already a way for (2) (it does not default though), I\r\n> think it might also be a good start inventing something for (1). For\r\n> instance, a selector for VACUUM statement I came up with is:\r\n>\r\n> VACUUM (verbose on) TABLES WITH (min_xid_age = 1600000000);\r\n> or\r\n> VACUUM (verbose on) TABLES WITH (min_age = failsafe_limit);\r\n>\r\n> We can expand it in the future to select tables by, for example, dead\r\n> tuple ratio, size, etc.\r\n>\r\n> It's a random thought but maybe worth considering.\r\n\r\nThat's an interesting idea. A separate selector clause could also\r\nallow users to choose how they interacted (e.g., should the options be\r\nOR'd or AND'd).\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 19 Jan 2022 19:57:23 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On 1/19/22, 11:15 AM, \"John Naylor\" <john.naylor@enterprisedb.com> wrote:\r\n> This seems to be the motivating reason for wanting new configurability\r\n> on the server side. In any case, new knobs are out of scope for this\r\n> thread. If the use case is compelling enough, may I suggest starting a\r\n> new thread?\r\n\r\nSure. Perhaps the new top-level command will use these new options\r\nsomeday.\r\n\r\n> Regarding the thread subject, I've been playing with the grammar, and\r\n> found it's quite easy to have\r\n>\r\n> VACUUM FOR WRAPAROUND\r\n> or\r\n> VACUUM FOR EMERGENCY\r\n>\r\n> since FOR is a reserved word (and following that can be an IDENT plus\r\n> a strcmp check) and cannot conflict with table names. This sounds a\r\n> bit more natural than VACUUM LIMIT. 
Opinions?\r\n\r\nI personally think VACUUM FOR WRAPAROUND is the best of the options\r\nprovided thus far.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 19 Jan 2022 21:11:48 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Jan 19, 2022 at 09:11:48PM +0000, Bossart, Nathan wrote:\n> I personally think VACUUM FOR WRAPAROUND is the best of the options\n> provided thus far.\n\nCould you avoid introducing a new grammar pattern in VACUUM? Any new\noption had better be within the parenthesized part as it is extensible\nat will with its set of DefElems.\n--\nMichael", "msg_date": "Thu, 20 Jan 2022 07:26:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Thu, Jan 20, 2022 at 4:14 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n>\n> On Wed, Jan 19, 2022 at 12:46 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Jan 14, 2022 at 7:04 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n> > >\n> > > I guess I'm ultimately imagining the new options as replacing the\n> > > vacuumdb implementation. IOW vacuumdb would just use MIN_(M)XID_AGE\n> > > behind the scenes (as would a new top-level command).\n> >\n> > I had the same idea.\n>\n> This seems to be the motivating reason for wanting new configurability\n> on the server side. In any case, new knobs are out of scope for this\n> thread. If the use case is compelling enough, may I suggest starting a\n> new thread?\n\nThe purpose of this thread is to provide a way for users to run vacuum\nonly very old tables (while skipping index cleanup, etc.), and the way\nis not limited to introducing a new top-level VACUUM statement yet,\nright? 
A new top-level VACUUM statement you proposed seems a good idea\nbut trying to achieve it by extending the current VACUUM statement is\nalso a good idea. So I think the ideas like MIN_XID_AGE option and new\ntable selector in VACUUM statement are relevant to this thread.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 21 Jan 2022 14:58:46 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Jan 19, 2022 at 5:26 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Could you avoid introducing a new grammar pattern in VACUUM? Any new\n> option had better be within the parenthesized part as it is extensible\n> at will with its set of DefElems.\n\nThis new behavior is not an option that one can sensibly mix with\nother options as the user sees fit, but rather hard-codes the\nparameters for its single purpose. That said, I do understand your\nobjection.\n\n[*thinks*]\n\nHow about the attached patch (and test script)? It still needs polish,\nbut it could work. It allows \"verbose\" to coexist, although that's\nreally only for testing normal mode. While testing in single-user\nmode, I was sad to find out that it not only doesn't emit messages\n(not a client), but also doesn't log. That would have been a decent\nway to monitor progress...\n\nIn this form, I'm no longer a fan of calling the option \"wraparound\",\nbecause it's too close to the \"is_wraparound\" param member.\nInternally, at least, we can use \"emergency\" or \"minimal\". (In fact\nthe bit symbol is VACOPT_MINIMAL for this draft). 
That can be worked\nout later.\n\nOn Fri, Jan 21, 2022 at 12:59 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> The purpose of this thread is to provide a way for users to run vacuum\n> only very old tables (while skipping index cleanup, etc.),\n\nAh, thank you Sawada-san, now I understand why we have been talking\npast each other. The purpose is actually:\n\n- to have a simple, easy to type, command\n- intended for single-user mode, but not limited to it (so it's easy to test)\n- to get out of single user mode as quickly as possible\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 21 Jan 2022 17:41:58 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On 1/21/22, 2:43 PM, \"John Naylor\" <john.naylor@enterprisedb.com> wrote:\r\n> - to have a simple, easy to type, command\r\n\r\nAFAICT the disagreement is really just about the grammar.\r\nSawada-san's idea would look something like\r\n\r\n VACUUM (FREEZE, INDEX_CLEANUP OFF, MIN_XID_AGE 1600000000, MIN_MXID_AGE 1600000000);\r\n\r\nwhile your proposal looks more like\r\n\r\n VACUUM (WRAPAROUND);\r\n\r\nThe former is highly configurable, but it is probably annoying to type\r\nat 3 AM, and the interaction between the two *_AGE options is not\r\nexactly intuitive (although I expect MIN_XID_AGE to be sufficient in\r\nmost cases). The latter is not as configurable, but it is much easier\r\nto type at 3 AM.\r\n\r\nI think simplicity is a good goal, but I don't know if the difference\r\nbetween the two approaches outweighs the benefits of configurability.\r\nIf you are in an emergency situation, you already will have to take\r\ndown the server, connect in single-user mode to the database(s) that\r\nneed vacuuming, and actually do the vacuuming. The wraparound\r\nWARNING/ERROR already has a HINT that describes the next steps\r\nrequired. 
Perhaps it would be enough to also emit an example VACUUM\r\ncommand to use.\r\n\r\nI think folks will find the configurability useful, too. With\r\nMIN_XID_AGE, it's super easy to have pg_cron vacuum everything over\r\n500M on the weekend (and also do index cleanup), which may allow you\r\nto use more relaxed autovacuum settings during the week. The docs\r\nalready have suggestions for manually vacuuming when the load is low\r\n[0], so I think it is reasonable to build additional support for this\r\nuse-case.\r\n\r\nNathan\r\n\r\n[0] https://www.postgresql.org/docs/devel/routine-vacuuming.html#VACUUM-FOR-SPACE-RECOVERY\r\n\r\n", "msg_date": "Fri, 21 Jan 2022 23:32:57 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Fri, Jan 21, 2022 at 05:41:58PM -0500, John Naylor wrote:\n> On Wed, Jan 19, 2022 at 5:26 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > Could you avoid introducing a new grammar pattern in VACUUM? Any new\n> > option had better be within the parenthesized part as it is extensible\n> > at will with its set of DefElems.\n> \n> This new behavior is not an option that one can sensibly mix with\n> other options as the user sees fit, but rather hard-codes the\n> parameters for its single purpose. 
That said, I do understand your
> objection.

This seems better, and it's shorter too.

I'm sure you meant \"&\" here (fixed in attached patch to appease the cfbot):
+ if (options | VACOPT_MINIMAL) 

It should either refuse to run if a list of tables is specified with MINIMAL,
or it should filter that list by XID condition.

As for the name, it could be MINIMAL or FAILSAFE or EMERGENCY or ??
I think the name should actually be a bit more descriptive, and maybe say XID,
like MINIMAL_XID or XID_EMERGENCY...

Normally, options are independent, but VACUUM (MINIMAL) is a \"shortcut\" to a
hardcoded set of options: freeze on, truncate off, cleanup off. So it refuses
to be combined with other options - good.

This is effectively a shortcut to hypothetical parameters for selecting tables
by XID/MXID age. In the future, someone could debate adding user-facing knobs
for table selection by age.

I still wonder if the relations should be processed in order of decreasing age.
An admin might have increased autovacuum_freeze_max_age up to 2e9, and your
query might return thousands of tables, with a wide range of sizes and ages.

Processing them in order of decreasing age would allow the admin to quickly
vacuum the oldest tables, and optionally interrupt vacuum to get out of single
user mode ASAP - even if they just want to run VACUUM(MINIMAL) in a normal
backend when services aren't offline. Processing them out of order might be
pretty surprising - they might run vacuum for an hour (or overnight), cancel
it, attempt to start the DB in normal mode, and conclude that it made no
visible progress.

On Fri, Jan 21, 2022 at 12:59 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> > The purpose of this thread is to provide a way for users to run vacuum
> > only very old tables (while skipping index cleanup, etc.),
> 
> Ah, thank you Sawada-san, now I understand why we have been talking
> past each other. 
The purpose is actually:\n> \n> - to have a simple, easy to type, command\n> - intended for single-user mode, but not limited to it (so it's easy to test)\n> - to get out of single user mode as quickly as possible", "msg_date": "Thu, 27 Jan 2022 19:28:42 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Thu, Jan 27, 2022 at 8:28 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> I'm sure you meant \"&\" here (fixed in attached patch to appease the cfbot):\n> + if (options | VACOPT_MINIMAL)\n\nThanks for catching that! That copy-pasto was also masking my failure\nto process the option properly -- fixed in the attached as v5.\n\n> It should either refuse to run if a list of tables is specified with MINIMAL,\n> or it should filter that list by XID condition.\n\nI went with the former for simplicity. As a single-purpose option, it\nmakes sense.\n\n> As for the name, it could be MINIMAL or FAILSAFE or EMERGENCY or ??\n> I think the name should actually be a bit more descriptive, and maybe say XID,\n> like MINIMAL_XID or XID_EMERGENCY...\n\nI went with EMERGENCY in this version to reinforce its purpose in the\nmind of the user (and reader of this code).\n\n> Normally, options are independent, but VACUUM (MINIMAL) is a \"shortcut\" to a\n> hardcoded set of options: freeze on, truncate off, cleanup off. So it refuses\n> to be combined with other options - good.\n>\n> This is effectively a shortcut to hypothetical parameters for selecting tables\n> by XID/MXID age. In the future, someone could debate adding user-facing knobs\n> for table selection by age.\n\nI used the params struct in v5 for the emergency cutoff ages. 
Even\nwith the values hard-coded, it seems cleaner to keep them here.\n\n> I still wonder if the relations should be processed in order of decreasing age.\n> An admin might have increased autovacuum_freeze_max_age up to 2e9, and your\n> query might return thousands of tables, with a wide range of sizes and ages.\n>\n> Processing them in order of decreasing age would allow the admin to quickly\n> vacuum the oldest tables, and optionally interrupt vacuum to get out of single\n> user mode ASAP - even if their just want to run VACUUM(MINIMAL) in a normal\n> backend when services aren't offline. Processing them out of order might be\n> pretty surprising - they might run vacuum for an hour (or overnight), cancel\n> it, attempt to start the DB in normal mode, and conclude that it made no\n> visible progress.\n\nWhile that seems like a nice property to have, it does complicate\nthings, so can be left for follow-on work.\n\nAlso in v5:\n\n- It mentions the new command in the error hint in\nGetNewTransactionId(). I'm not sure if multi-word commands should be\nquoted like this.\n- A first draft of documentation\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 1 Feb 2022 16:50:31 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Feb 2, 2022 at 6:50 AM John Naylor <john.naylor@enterprisedb.com> wrote:\n>\n> On Thu, Jan 27, 2022 at 8:28 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> > I'm sure you meant \"&\" here (fixed in attached patch to appease the cfbot):\n> > + if (options | VACOPT_MINIMAL)\n>\n> Thanks for catching that! 
That copy-pasto was also masking my failure\n> to process the option properly -- fixed in the attached as v5.\n>\n> > It should either refuse to run if a list of tables is specified with MINIMAL,\n> > or it should filter that list by XID condition.\n>\n> I went with the former for simplicity. As a single-purpose option, it\n> makes sense.\n>\n> > As for the name, it could be MINIMAL or FAILSAFE or EMERGENCY or ??\n> > I think the name should actually be a bit more descriptive, and maybe say XID,\n> > like MINIMAL_XID or XID_EMERGENCY...\n>\n> I went with EMERGENCY in this version to reinforce its purpose in the\n> mind of the user (and reader of this code).\n>\n> > Normally, options are independent, but VACUUM (MINIMAL) is a \"shortcut\" to a\n> > hardcoded set of options: freeze on, truncate off, cleanup off. So it refuses\n> > to be combined with other options - good.\n> >\n> > This is effectively a shortcut to hypothetical parameters for selecting tables\n> > by XID/MXID age. In the future, someone could debate adding user-facing knobs\n> > for table selection by age.\n>\n> I used the params struct in v5 for the emergency cutoff ages. Even\n> with the values hard-coded, it seems cleaner to keep them here.\n>\n> > I still wonder if the relations should be processed in order of decreasing age.\n> > An admin might have increased autovacuum_freeze_max_age up to 2e9, and your\n> > query might return thousands of tables, with a wide range of sizes and ages.\n> >\n> > Processing them in order of decreasing age would allow the admin to quickly\n> > vacuum the oldest tables, and optionally interrupt vacuum to get out of single\n> > user mode ASAP - even if their just want to run VACUUM(MINIMAL) in a normal\n> > backend when services aren't offline. 
Processing them out of order might be\n> > pretty surprising - they might run vacuum for an hour (or overnight), cancel\n> > it, attempt to start the DB in normal mode, and conclude that it made no\n> > visible progress.\n>\n> While that seems like a nice property to have, it does complicate\n> things, so can be left for follow-on work.\n>\n> Also in v5:\n>\n> - It mentions the new command in the error hint in\n> GetNewTransactionId(). I'm not sure if multi-word commands should be\n> quoted like this.\n> - A first draft of documentation\n\nThank you for updating the patch.\n\nI have a few questions and comments:\n\n+ The only other option that may be combined with\n<literal>VERBOSE</literal>, although in single-user mode no client\nmessages are\n+ output.\n\nGiven VERBOSE with EMERGENCY can work only in multi-user mode, why\nonly VERBOSE can be specified with EMERGENCY? I think the same is true\nfor other options like PARALLEL; PARALLEL can work only in multi-user\nmode.\n\n---\n+ It performs a database-wide vacuum on tables, toast tables, and\nmaterialized views whose\n+ xid age or mxid age is older than 1 billion.\n\nDo we need to allow the user to specify the threshold or need a higher\nvalue (at least larger than 1.6 billion, default value of\nvacuum_failsafe_age)? I imagined a case where there are a few very-old\ntables (say 2 billion old) and many tables that are older than 1\nbillion. In this case, VACUUM (EMERGENCY) would take a long time to\ncomplete. 
But to minimize the downtime, we might want to run VACUUM\n(EMERGENCY) on only the very-old tables, start the cluster in\nmulti-user mode, and run vacuum on multiple tables in parallel.\n\n---\n+ if (params->options & VACOPT_EMERGENCY)\n+ {\n+ /*\n+ * Only consider relations able to hold unfrozen XIDs (anything else\n+ * should have InvalidTransactionId in relfrozenxid anyway).\n+ */\n+ if (classForm->relkind != RELKIND_RELATION &&\n+ classForm->relkind != RELKIND_MATVIEW &&\n+ classForm->relkind != RELKIND_TOASTVALUE)\n+ {\n+ Assert(!TransactionIdIsValid(classForm->relfrozenxid));\n+ Assert(!MultiXactIdIsValid(classForm->relminmxid));\n+ continue;\n+ }\n+\n+ table_xid_age = DirectFunctionCall1(xid_age,\nclassForm->relfrozenxid);\n+ table_mxid_age = DirectFunctionCall1(mxid_age,\nclassForm->relminmxid);\n+\n\nI think that instead of calling xid_age and mxid_age for each\nrelation, we can compute the thresholds for xid and mxid once, and\nthen compare them to relation's relfrozenxid and relminmxid.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 3 Feb 2022 17:14:07 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Thu, Feb 3, 2022 at 3:14 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n> + The only other option that may be combined with\n> <literal>VERBOSE</literal>, although in single-user mode no client\n> messages are\n> + output.\n>\n> Given VERBOSE with EMERGENCY can work only in multi-user mode, why\n> only VERBOSE can be specified with EMERGENCY? I think the same is true\n> for other options like PARALLEL; PARALLEL can work only in multi-user\n> mode.\n\nYou are right; it makes sense to allow options that would be turned\noff automatically in single-user mode. Even if we don't expect it to\nbe used in normal mode, the restrictions should make sense. 
Also,\nmaybe documenting the allowed combinations is a distraction in the\nmain entry and should be put in the notes at the bottom.\n\n> + It performs a database-wide vacuum on tables, toast tables, and\n> materialized views whose\n> + xid age or mxid age is older than 1 billion.\n>\n> Do we need to allow the user to specify the threshold or need a higher\n> value (at least larger than 1.6 billion, default value of\n> vacuum_failsafe_age)? I imagined a case where there are a few very-old\n> tables (say 2 billion old) and many tables that are older than 1\n> billion. In this case, VACUUM (EMERGENCY) would take a long time to\n> complete.\n\nI still don't think fine-tuning is helpful here. Shutdown vacuum\nshould be just as trivial to run as it is now, but with better\nbehavior. I believe a user knowledgeable enough to come up with the\nbest number is unlikely to get in this situation in the first place.\nI'm also not sure a production support engineer would (or should)\nimmediately figure out a better number than a good default. That said,\nthe 1 billion figure was a suggestion from Peter G. upthread, and a\nhigher number could be argued.\n\n> But to minimize the downtime, we might want to run VACUUM\n> (EMERGENCY) on only the very-old tables, start the cluster in\n> multi-user mode, and run vacuum on multiple tables in parallel.\n\nThat's exactly the idea. Also, back in normal mode, we can start\nstreaming WAL again. However, we don't want to go back online so close\nto the limit that we risk shutdown again. People have a reasonable\nexpectation that if you fix an emergency, it's now fixed and the\napplication can go back online. 
Falling down repeatedly, or worrying\nif it's possible, is very bad.\n\n> + if (params->options & VACOPT_EMERGENCY)\n> + {\n> + /*\n> + * Only consider relations able to hold unfrozen XIDs (anything else\n> + * should have InvalidTransactionId in relfrozenxid anyway).\n> + */\n> + if (classForm->relkind != RELKIND_RELATION &&\n> + classForm->relkind != RELKIND_MATVIEW &&\n> + classForm->relkind != RELKIND_TOASTVALUE)\n> + {\n> + Assert(!TransactionIdIsValid(classForm->relfrozenxid));\n> + Assert(!MultiXactIdIsValid(classForm->relminmxid));\n> + continue;\n> + }\n> +\n> + table_xid_age = DirectFunctionCall1(xid_age,\n> classForm->relfrozenxid);\n> + table_mxid_age = DirectFunctionCall1(mxid_age,\n> classForm->relminmxid);\n> +\n>\n> I think that instead of calling xid_age and mxid_age for each\n> relation, we can compute the thresholds for xid and mxid once, and\n> then compare them to relation's relfrozenxid and relminmxid.\n\nThat sounds like a good idea if it's simple to implement, so I will\ntry it. If it adds complexity, I don't think it's worth it. Scanning a\nfew thousand rows in pg_class along with the function calls is tiny\ncompared to the actual vacuum work.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 3 Feb 2022 10:49:00 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" 
}, { "msg_contents": "Thinking further about the use of emergency mode, we have this:\n\n\"If for some reason autovacuum fails to clear old XIDs from a table,\nthe system will begin to emit warning messages like this when the\ndatabase's oldest XIDs reach forty million transactions from the\nwraparound point:\n\nWARNING: database \"mydb\" must be vacuumed within 39985967 transactions\nHINT: To avoid a database shutdown, execute a database-wide VACUUM in\nthat database.\n\"\n\nIt seems people tend not to see these warnings if they didn't already\nhave some kind of monitoring which would prevent them from getting\nhere in the first place. But if they do, the hint should mention the\nemergency option here, too. This puts Justin's idea upthread in a new\nlight -- if the admin does notice this warning, then emergency mode\nshould indeed vacuum the oldest tables first, since autovacuum is not\n(yet) smart enough to do that. I'll pursue that as a follow-up.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 3 Feb 2022 11:18:43 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Tue, Feb 01, 2022 at 04:50:31PM -0500, John Naylor wrote:\n> On Thu, Jan 27, 2022 at 8:28 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> > I'm sure you meant \"&\" here (fixed in attached patch to appease the cfbot):\n> > + if (options | VACOPT_MINIMAL)\n> \n> Thanks for catching that! That copy-pasto was also masking my failure\n> to process the option properly -- fixed in the attached as v5.\n\nThank the cfbot ;)\n\nActually, your most recent patch failed again without this:\n\n- if (params->VACOPT_EMERGENCY)\n+ if (params->options & VACOPT_EMERGENCY)\n\n> - It mentions the new command in the error hint in\n> GetNewTransactionId(). 
I'm not sure if multi-word commands should be\n> quoted like this.\n\nUse <literal> ?\n\n> + xid age or mxid age is older than 1 billion. To complete as quickly as possible, an emergency\n> + vacuum will skip truncation and index cleanup, and will skip toast tables whose age has not\n> + exceeded the cutoff.\n\nWhy does this specially mention toast tables ?\n\n> + While this option could be used while the postmaster is running, it is expected that the wraparound\n> + failsafe mechanism will automatically work in the same way to prevent imminent shutdown.\n> + When <literal>EMERGENCY</literal> is specified no tables may be listed, since it is designed to\n\nspecified comma\n\n> + select candidate relations from the entire database.\n> + The only other option that may be combined with <literal>VERBOSE</literal>, although in single-user mode no client messages are\n\nthis is missing a word?\nMaybe say: May not be combined with any other option, other than VERBOSE.\n\nShould the docs describe that the vacuum is done with cost based delay disabled\nand with vacuum_freeze_table_age=0 (and other settings).\n\n> +\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n> +\t\t\t\t\t\t\t\terrmsg(\"option \\\"%s\\\" is incompatible with EMERGENCY\", opt->defname),\n> +\t\t\t\t\t\t\t\tparser_errposition(pstate, opt->location)));\n\nIMO new code should avoid using the outer parenthesis around errcode().\n\nMaybe the errmsg should say: .. may not be specified with EMERGENCY.\nEMERGENCY probably shouldn't be part of the translatable string.\n\n+ if (strcmp(opt->defname, \"emergency\") != 0 &&\n+ strcmp(opt->defname, \"verbose\") != 0 &&\n+ defGetBoolean(opt))\n\nIt's wrong to call getBoolean(), since the options may not be boolean.\npostgres=# VACUUM(EMERGENCY, INDEX_CLEANUP auto);\nERROR: index_cleanup requires a Boolean value\n\nI think EMERGENCY mode should disable process_toast. It already processes\ntoast tables separately. 
See 003.\n\nShould probably exercise (EMERGENCY) in vacuum.sql. See 003.\n\n> > I still wonder if the relations should be processed in order of decreasing age.\n> > An admin might have increased autovacuum_freeze_max_age up to 2e9, and your\n> > query might return thousands of tables, with a wide range of sizes and ages.\n> \n> While that seems like a nice property to have, it does complicate\n> things, so can be left for follow-on work.\n\nI added that in the attached 003.\n\n-- \nJustin", "msg_date": "Thu, 3 Feb 2022 11:29:32 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Thu, Dec 9, 2021 at 8:56 PM Andres Freund <andres@anarazel.de> wrote:\n> I think we should move *away* from single user mode, rather than the\n> opposite. It's a substantial code burden and it's hard to use.\n\nYes. This thread seems to be largely devoted to the topic of making\nsingle-user vacuum work better, but I don't see anyone asking the\nquestion \"why do we have a message that tells people to vacuum in\nsingle user mode in the first place?\". It's basically bad advice, with\none small exception that I'll talk about in a minute. Suppose we had a\nmessage in the tree that said \"HINT: Consider angering a live anaconda\nto fix this problem.\" If that were so, the correct thing to do\nwouldn't be to add a section to our documentation explaining how to\ndeal with angry anacondas. The correct thing to do would be to remove\nthe hint as bad advice that we never should have offered in the first\nplace. And so here. We should not try to make vacuum in single\nuser-mode work better or differently, or at least that shouldn't be\nour primary objective. We should just stop telling people to do it. We\nshould probably add messages and documentation *discouraging* the use\nof single user mode for recovering from wraparound trouble, exactly\nthe opposite of what we do now. 
There's nothing we can do in\nsingle-user mode that we can't do equally well in multi-user mode. If\npeople try to fix wraparound problems in multi-user mode, they still\nhave read-only access to their database, they can use parallelism,\nthey can use command line utilities like vacuumdb, and they can use\npsql which has line editing and allows remote access and is a way\nnicer user experience than running postgres --single. We need a really\ncompelling reason to tell people to give up all those advantages, and\nthere is no such reason. It makes just as much sense as telling people\nto deal with wraparound problems by angering a live anaconda.\n\nI did say there was an exception, and it's this: the last time I\nstudied this issue back in 2019,[1] vacuum insisted on trying to\ntruncate tables even when the system is in wraparound danger. Then it\nwould fail, because truncating the table required allocating an XID,\nwhich would fail if we were short on XIDs. By putting the system in\nsingle user mode, you could continue to allocate XIDs and thus VACUUM\nwould work. However, if you think about this for even 10 seconds, you\ncan see that it's terrible. If we're so short of XIDs that we are\nscared to allocate them for fear of causing an actual wraparound,\nputting the system into a mode where that protection is bypassed is a\nsuper-terrible idea. People will be able to run vacuum, yes, but if\nthey have too many tables, they will actually experience wraparound\nand thus data loss before they process all the tables they have. What\nwe ought to do to solve this problem is NOT TRUNCATE when the number\nof remaining XIDs is small, so that we don't consume any of the\nremaining XIDs until we get the system out of wraparound danger. I\nthink the \"failsafe\" stuff Peter added in v14 fixes that, though. If\nnot, we should adjust it so it does. 
And then we should KILL WITH FIRE\nthe message telling people to use single user mode -- and once we do\nthat, the question of what the behavior ought to be when someone does\nrun VACUUM in single user mode becomes a lot less important.\n\nThis problem is basically self-inflicted. We have given people bad\nadvice (use single user mode) and then they suffer when they take it.\nAmeliorating the suffering isn't the worst idea ever, but it's\nbasically fixing the wrong problem.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n[1] http://postgr.es/m/CA+Tgmob1QCMJrHwRBK8HZtGsr+6cJANRQw2mEgJ9e=D+z7cOsw@mail.gmail.com\n\n\n", "msg_date": "Thu, 3 Feb 2022 13:05:50 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Thu, Feb 3, 2022 at 1:06 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Dec 9, 2021 at 8:56 PM Andres Freund <andres@anarazel.de> wrote:\n> > I think we should move *away* from single user mode, rather than the\n> > opposite. It's a substantial code burden and it's hard to use.\n>\n> Yes. This thread seems to be largely devoted to the topic of making\n> single-user vacuum work better, but I don't see anyone asking the\n> question \"why do we have a message that tells people to vacuum in\n> single user mode in the first place?\". It's basically bad advice, with\n> one small exception that I'll talk about in a minute.\n\nThe word \"advice\" sounds like people have a choice, rather than the\nsystem not accepting commands anymore. It would be much less painful\nif the system closed connections and forbade all but superusers to\nconnect, but that sounds like a lot of work. 
(happy to be proven\notherwise)\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 3 Feb 2022 13:34:28 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Thu, Feb 3, 2022 at 1:34 PM John Naylor <john.naylor@enterprisedb.com> wrote:\n> The word \"advice\" sounds like people have a choice, rather than the\n> system not accepting commands anymore. It would be much less painful\n> if the system closed connections and forbade all but superusers to\n> connect, but that sounds like a lot of work. (happy to be proven\n> otherwise)\n\nThey *do* have a choice. They can continue to operate the system in\nmulti-user mode, they can have read access to their data, and they can\nrun VACUUM and other non-XID-allocating commands to fix the issue.\nSure, their application can't run commands that allocate XIDs, but\nit's not going to be able to do that if they go to single-user mode\neither.\n\nI don't understand why we would want the system to stop accepting\nconnections other than superuser connections. That would provide\nstrictly less functionality and I don't understand what it would gain.\nBut it would still be better than going into single-user mode, which\nprovides even less functionality and has basically no advantages of\nany kind.\n\nWhy are you convinced that the user HAS to go to single-user mode? I\ndon't think they have to do that, and I don't think they should want\nto do that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 3 Feb 2022 13:42:20 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" 
}, { "msg_contents": "On Thu, Feb 3, 2022 at 1:42 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Feb 3, 2022 at 1:34 PM John Naylor <john.naylor@enterprisedb.com> wrote:\n> > The word \"advice\" sounds like people have a choice, rather than the\n> > system not accepting commands anymore. It would be much less painful\n> > if the system closed connections and forbade all but superusers to\n> > connect, but that sounds like a lot of work. (happy to be proven\n> > otherwise)\n>\n> They *do* have a choice. They can continue to operate the system in\n> multi-user mode, they can have read access to their data, and they can\n> run VACUUM and other non-XID-allocating commands to fix the issue.\n> Sure, their application can't run commands that allocate XIDs, but\n> it's not going to be able to do that if they go to single-user mode\n> either.\n\nI just checked some client case notes where they tried just that\nbefore getting outside help, and both SELECT and VACUUM FREEZE\ncommands were rejected. The failure is clearly indicated in the log.\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 3 Feb 2022 16:18:27 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "Hi,\n\nOn 2022-02-03 13:42:20 -0500, Robert Haas wrote:\n> They *do* have a choice. They can continue to operate the system in\n> multi-user mode, they can have read access to their data, and they can\n> run VACUUM and other non-XID-allocating commands to fix the issue.\n> Sure, their application can't run commands that allocate XIDs, but\n> it's not going to be able to do that if they go to single-user mode\n> either.\n\nI wonder if we shouldn't add some exceptions to the xid allocation\nprevention. It makes sense that we don't allow random DML. But it's e.g. 
often
more realistic to drop / truncate a few tables with unimportant content,
rather than spend the time vacuuming those. We could e.g. allow xid
consumption within VACUUM, TRUNCATE, DROP TABLE / INDEX when run at the top
level for longer than we allow it for anything else.


> But it would still be better than going into single-user mode, which
> provides even less functionality and has basically no advantages of
> any kind.

Indeed. Single user is the worst response to this (and just about anything
else, really). Even just getting into the single user mode takes a while
(shutdown checkpoint). The user interface is completely different (and
awful). The buffer cache is completely cold. The system is slower because
there's no wal writer / checkpointer running. Which basically is a list of
things one absolutely does not want when confronted with a wraparound
situation.

Greetings,

Andres Freund


", "msg_date": "Thu, 3 Feb 2022 13:50:48 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Thu, Feb 3, 2022 at 4:18 PM John Naylor <john.naylor@enterprisedb.com> wrote:
> I just checked some client case notes where they tried just that
> before getting outside help, and both SELECT and VACUUM FREEZE
> commands were rejected. The failure is clearly indicated in the log.

It would be helpful to know how it failed - what was the error? And
then I think we should just fix whatever the problem is. As I said
before, I know TRUNCATE has been an issue in the past, and if that's
not already fixed in v14, we should. If there's other stuff, we should
fix that too. The point I'm making here, which I still believe to be
valid, is that there's nothing intrinsically better about being in
single user mode. In fact, it's clearly worse. 
And I don't think it's\nhard to fix it so that we avoid people needing to do that in the first\nplace.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 3 Feb 2022 16:58:00 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "Hi,\n\nOn 2022-02-03 16:18:27 -0500, John Naylor wrote:\n> I just checked some client case notes where they tried just that\n> before getting outside help, and both SELECT and VACUUM FREEZE\n> commands were rejected.\n\nWhat kind of SELECT was that? Any chance it caused a write via functions, a\nview, whatnot? And what version? What was the exact error message?\n\n\nVACUUM FREEZE is a *terrible* idea to run when encountering anti-wraparound\nissues. I understand why people think they might need it, but basically all it\nachieves is to make VACUUM do a lot more, none of it helpful to get out of the\nwraparound-can't-write situation (those rows will already get frozen).\n\nI'd plus one the addition of a HINT that tells users that FREEZE likely is a\nbad idea when in wraparound land. We should allow it, because there are\nsituations where it might make sense, but the people that can make that\njudgement know they can ignore the HINT.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 3 Feb 2022 13:58:30 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Thu, Feb 3, 2022 at 4:50 PM Andres Freund <andres@anarazel.de> wrote:\n> I wonder if we shouldn't add some exceptions to the xid allocation\n> prevention. It makes sense that we don't allow random DML. But it's e.g. often\n> more realistic to drop / truncate a few tables with unimportant content,\n> rather than spend the time vacuuming those. We could e.g. 
allow xid\n> consumption within VACUUM, TRUNCATE, DROP TABLE / INDEX when run at the top\n> level for longer than we allow it for anything else.\n\nTrue, although we currently don't start refusing XID allocation\naltogether until only 1 million remain, IIRC. And that's cutting it\nreally close if we need to start consuming 1 XID per table we need to\ndrop. We might need to push out some of the thresholds a bit.\n\nFor the most part, I think that there's no reason why autovacuum\nshouldn't be able to recover from this situation automatically, as\nlong as old replication slots and prepared transactions are cleaned up\nand any old transactions are killed off. I don't think we're very far\nfrom that Just Working, but we are not all there yet either. Manual\nintervention to drop tables etc. is reasonable to allow a bit more\nthan we do now, but the big problem IMO is that the behavior when we\nrun short of XIDs has had very little testing and bug fixing, so\nthings that don't really need to break just do anyway.\n\n> Indeed. Single user is the worst response to this (and just about anything\n> else, really). Even just getting into the single user mode takes a while\n> (shutdown checkpoint). The user interface is completely different (and\n> awful). The buffer cache is completely cold. The system is slower because\n> there's no wal writer / checkpointer running. Which basically is a list of\n> things one absolutely do not wants when confronted with a wraparound\n> situation.\n\n+1.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 3 Feb 2022 17:02:15 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" 
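To make the threshold arithmetic being discussed here concrete, the following is a minimal Python sketch of wraparound-aware XID comparison and remaining-headroom math, in the spirit of PostgreSQL's TransactionIdPrecedes()/SetTransactionIdLimit() logic. It is only an illustration with assumed constants (a one-million-XID safety margin), not the server's actual code, and it ignores details such as the reserved XIDs below FirstNormalTransactionId:

```python
# Sketch of modular 32-bit XID arithmetic: an XID "precedes" another if the
# signed 32-bit difference is negative, so at any moment roughly 2^31 XIDs
# lie in the logical past and 2^31 in the logical future.

MASK32 = 0xFFFFFFFF

def signed32(n: int) -> int:
    """Interpret n (mod 2^32) as a signed 32-bit integer."""
    n &= MASK32
    return n - 0x100000000 if n >= 0x80000000 else n

def xid_precedes(a: int, b: int) -> bool:
    """True if XID a logically precedes XID b on the 32-bit circle."""
    return signed32(a - b) < 0

def xids_until_stop(next_xid: int, oldest_frozen_xid: int,
                    safety_margin: int = 1_000_000) -> int:
    """Headroom before the hard stop limit (negative = already past it).

    The wrap limit sits 2^31 XIDs after the oldest unfrozen XID; the stop
    limit is a safety margin (on the order of a million XIDs, as mentioned
    in the discussion above) before that.
    """
    stop_limit = (oldest_frozen_xid + 2**31 - safety_margin) & MASK32
    return signed32(stop_limit - next_xid)
```

The point of the comparison surviving wraparound is that an XID allocated just after the 32-bit counter wrapped still compares as "after" one allocated just before it, which is why the system can keep functioning as long as freezing keeps the oldest unfrozen XID within 2^31 of the next one.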
}, { "msg_contents": "On Thu, Feb 3, 2022 at 4:58 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-02-03 16:18:27 -0500, John Naylor wrote:\n> > I just checked some client case notes where they tried just that\n> > before getting outside help, and both SELECT and VACUUM FREEZE\n> > commands were rejected.\n>\n> What kind of SELECT was that? Any chance it caused a write via functions, a\n> view, whatnot? And what version? What was the exact error message?\n\nLooking closer, there is a function defined by an extension. I'd have\nto dig further to see if writes happen. The error is exactly what\nwe've been talking about:\n\n2022-01-03 22:03:23 PST ERROR: database is not accepting commands to\navoid wraparound data loss in database \"<redacted>\"\n2022-01-03 22:03:23 PST HINT: Stop the postmaster and vacuum that\ndatabase in single-user mode. You might also need to commit or roll\nback old prepared transactions.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 3 Feb 2022 17:07:48 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Thu, Feb 3, 2022 at 5:08 PM John Naylor <john.naylor@enterprisedb.com> wrote:\n> Looking closer, there is a function defined by an extension. I'd have\n> to dig further to see if writes happen. The error is exactly what\n> we've been talking about:\n>\n> 2022-01-03 22:03:23 PST ERROR: database is not accepting commands to\n> avoid wraparound data loss in database \"<redacted>\"\n> 2022-01-03 22:03:23 PST HINT: Stop the postmaster and vacuum that\n> database in single-user mode. You might also need to commit or roll\n> back old prepared transactions.\n\nThat error comes from GetNewTransactionId(), so that function must\neither try to execute DML or do something else which causes an XID to\nbe assigned. 
I think a plain SELECT should work just fine.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 3 Feb 2022 19:30:13 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "Hi,\n\nOn 2022-02-03 17:02:15 -0500, Robert Haas wrote:\n> On Thu, Feb 3, 2022 at 4:50 PM Andres Freund <andres@anarazel.de> wrote:\n> > I wonder if we shouldn't add some exceptions to the xid allocation\n> > prevention. It makes sense that we don't allow random DML. But it's e.g. often\n> > more realistic to drop / truncate a few tables with unimportant content,\n> > rather than spend the time vacuuming those. We could e.g. allow xid\n> > consumption within VACUUM, TRUNCATE, DROP TABLE / INDEX when run at the top\n> > level for longer than we allow it for anything else.\n>\n> True, although we currently don't start refusing XID allocation\n> altogether until only 1 million remain, IIRC. And that's cutting it\n> really close if we need to start consuming 1 XID per table we need to\n> drop. 
We might need to push out some of the thresholds a bit.\n\nYea, I'd have no problem leaving the \"hard\" limit somewhere closer to 1\nmillion (although 100k should be just as well), but introduce a softer \"only\nvacuum/drop/truncate\" limit a good bit before that.\n\n\n> For the most part, I think that there's no reason why autovacuum\n> shouldn't be able to recover from this situation automatically, as\n> long as old replication slots and prepared transactions are cleaned up\n> and any old transactions are killed off.\n\nTo address the \"as long as\" part: I think that describing better what is\nholding back the horizon would be a significant usability improvement.\n\nImagine that instead of the generic hints in these messages:\n\t\t\t\tereport(ERROR,\n\t\t\t\t\t\t(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n\t\t\t\t\t\t errmsg(\"database is not accepting commands to avoid wraparound data loss in database \\\"%s\\\"\",\n\t\t\t\t\t\t\t\toldest_datname),\n\t\t\t\t\t\t errhint(\"Stop the postmaster and vacuum that database in single-user mode.\\n\"\n\t\t\t\t\t\t\t\t \"You might also need to commit or roll back old prepared transactions, or drop stale replication slots.\")));\nand\n\t\tereport(WARNING,\n\t\t\t\t(errmsg(\"oldest xmin is far in the past\"),\n\t\t\t\t errhint(\"Close open transactions soon to avoid wraparound problems.\\n\"\n\t\t\t\t\t\t \"You might also need to commit or roll back old prepared transactions, or drop stale replication slots.\")));\n\nwe'd actually tell the user a bit more about what is causing the\nproblem.\n\nWe can compute the:\n1) oldest slot by xmin, with name\n2) oldest walsender by xmin, with pid\n3) oldest prepared transaction id by xid / xmin, with name\n4) oldest in-progress transaction id by xid / xmin, with name\n5) oldest database datfrozenxid, with database name\n\nIf 1-4) are close to 5), there's no point in trying to vacuum aggressively, it\nwon't help. 
So we instead can say that the xmin horizon (with a better name)\nis held back by the oldest of these, with enough identifying information for\nthe user to actually know where to look.\n\nIn contrast, if 5) is older than 1-4), then we can tell the user which\ndatabase is the problem, as we do right now, but we can stop mentioning the\n\"You might also need to commit ...\" bit.\n\n\nAlso, adding an SRF providing the above in a useful format would be great for\nmonitoring and for \"remote debugging\" of problems.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 3 Feb 2022 17:35:39 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Thu, Feb 3, 2022 at 8:35 PM Andres Freund <andres@anarazel.de> wrote:\n> Yea, I'd have no problem leaving the \"hard\" limit somewhere closer to 1\n> million (although 100k should be just as well), but introduce a softer \"only\n> vacuum/drop/truncate\" limit a good bit before that.\n\n+1.\n\n> To address the \"as long as\" part: I think that describing better what is\n> holding back the horizon would be a significant usability improvement.\n>\n> Imagine that instead of the generic hints in these messages:\n> ereport(ERROR,\n> (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n> errmsg(\"database is not accepting commands to avoid wraparound data loss in database \\\"%s\\\"\",\n> oldest_datname),\n> errhint(\"Stop the postmaster and vacuum that database in single-user mode.\\n\"\n> \"You might also need to commit or roll back old prepared transactions, or drop stale replication slots.\")));\n> and\n> ereport(WARNING,\n> (errmsg(\"oldest xmin is far in the past\"),\n> errhint(\"Close open transactions soon to avoid wraparound problems.\\n\"\n> \"You might also need to commit or roll back old prepared transactions, or drop stale replication slots.\")));\n>\n> we'd actually tell the user a bit more what about what is 
causing the\n> problem.\n>\n> We can compute the:\n> 1) oldest slot by xmin, with name\n> 2) oldest walsender by xmin, with pid\n> 3) oldest prepared transaction id by xid / xmin, with name\n> 4) oldest in-progress transaction id by xid / xmin, with name\n> 5) oldest database datfrozenxid, with database name\n>\n> If 1-4) are close to 5), there's no point in trying to vacuum aggressively, it\n> won't help. So we instead can say that the xmin horizon (with a better name)\n> is held back by the oldest of these, with enough identifying information for\n> the user to actually know where to look.\n\nYes. This kind of thing strikes me as potentially a huge help. To\nrephrase that in other terms, we could tell the user what the actual\nproblem is instead of suggesting to them that they shut down their\ndatabase just for fun. It's \"just for fun\" because (a) it typically\nwon't fix the real problem, which is most often (1) or (3) from your\nlist, and even if it's (2) or (4) they could just kill the session\ninstead of shutting down the whole database, and (b) no matter what\nneeds to be done, whether it's VACUUM or ROLLBACK PREPARED or\nsomething else, they may as well do that thing in multi-user mode\nrather than single-user mode, unless we as PostgreSQL developers\nforgot to make that actually work.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 3 Feb 2022 21:08:03 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" 
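The diagnosis both of these messages are asking for can be sketched in a few lines. This is purely illustrative Python, not an existing PostgreSQL API; the function and the holder labels are hypothetical. The idea is exactly the one described above: compare the age of the oldest XID pinned by each class of holder against the oldest database's datfrozenxid age, and name the actual culprit instead of emitting a generic hint:

```python
# Hypothetical sketch of the proposed diagnostic: given the age of the
# oldest XID pinned by each holder (replication slots, walsenders, prepared
# transactions, in-progress transactions), report whichever one actually
# limits how far the freezing horizon can advance.

def diagnose_horizon(holders: dict, datfrozenxid_age: int,
                     slack: int = 1_000_000) -> str:
    """holders maps a description (e.g. 'replication slot "s1"') to the
    age, in XIDs, of the oldest XID it pins; datfrozenxid_age is the age
    of the oldest database's datfrozenxid."""
    if holders:
        label, age = max(holders.items(), key=lambda kv: kv[1])
        if age >= datfrozenxid_age - slack:
            # Something pins an XID nearly as old as datfrozenxid itself:
            # vacuuming harder cannot advance the horizon past it.
            return f"horizon held back by {label}"
    return "horizon limited by a database's datfrozenxid; (auto)vacuum can advance it"
```

The same comparison would back both the improved hint text and the monitoring SRF suggested earlier in the thread.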
}, { "msg_contents": "Hi,\n\nOn 2022-02-03 21:08:03 -0500, Robert Haas wrote:\n> On Thu, Feb 3, 2022 at 8:35 PM Andres Freund <andres@anarazel.de> wrote:\n> > We can compute the:\n> > 1) oldest slot by xmin, with name\n> > 2) oldest walsender by xmin, with pid\n> > 3) oldest prepared transaction id by xid / xmin, with name\n> > 4) oldest in-progress transaction id by xid / xmin, with name\n> > 5) oldest database datfrozenxid, with database name\n> >\n> > If 1-4) are close to 5), there's no point in trying to vacuum aggressively, it\n> > won't help. So we instead can say that the xmin horizon (with a better name)\n> > is held back by the oldest of these, with enough identifying information for\n> > the user to actually know where to look.\n> \n> Yes. This kind of thing strikes me as potentially a huge help. To\n> rephrase that in other terms, we could tell the user what the actual\n> problem is instead of suggesting to them that they shut down their\n> database just for fun. It's \"just for fun\" because (a) it typically\n> won't fix the real problem, which is most often (1) or (3) from your\n> list, and even if it's (2) or (4) they could just kill the session\n> instead of shutting down the whole database\n\nNot that it matters, but IME the leading cause is 5). Often due to autovacuum\nconfiguration. Which reminded me of the one thing that single user mode\nis actually helpful for: Being able to start a manual VACUUM.\n\nOnce autovacuum is churning along in anti-wrap mode, with multiple workers, it\ncan be hard to manually VACUUM without waiting for autovacuum to do its\nthrottled thing. The only way is to start the manual VACUUM and kill\nautovacuum workers whenever they're blocking the manual vacuum(s).\n\n\nWhich reminds me: Perhaps we ought to hint about reducing / removing\nautovacuum cost limits in this situation? And perhaps make autovacuum absorb\nconfig changes while running? 
It's annoying that an autovac halfway into a\nhuge table doesn't absorb changed cost limits for example.\n\n\n> (b) no matter what needs to be done, whether it's VACUUM or ROLLBACK\n> PREPARED or something else, they may as well do that thing in multi-user\n> mode rather than single-user mode, unless we as PostgreSQL developers forgot\n> to make that actually work.\n\nOne thing that we made quite hard is to roll back prepared transactions,\nbecause we require being in the same database (a lot of fun in single user\nmode with a lot of databases). We can't commit without being in the same database, but I\nwonder if it's doable to allow rollbacks?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 3 Feb 2022 19:26:01 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Thu, Feb 03, 2022 at 07:26:01PM -0800, Andres Freund wrote:\n> Which reminds me: Perhaps we ought to hint about reducing / removing\n> autovacuum cost limits in this situation? And perhaps make autovacuum absorb\n> config changes while running? It's annoying that an autovac halfway into a\n> huge table doesn't absorb changed cost limits for example.\n\nI remembered this thread:\n\nhttps://commitfest.postgresql.org/32/2983/\n| Running autovacuum dynamic update to cost_limit and delay\n\nhttps://www.postgresql.org/message-id/flat/13A6B954-5C21-4E60-BC06-751C8EA469A0%40amazon.com\nhttps://www.postgresql.org/message-id/flat/0A3F8A3C-4328-4A4B-80CF-14CEBE0B695D%40amazon.com\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 3 Feb 2022 22:44:07 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" 
}, { "msg_contents": "On Thu, Feb 3, 2022 at 7:30 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> That error comes from GetNewTransactionId(), so that function must\n> either try to execute DML or do something else which causes an XID to\n> be assigned. I think a plain SELECT should work just fine.\n\nIt was indeed doing writes, so that much is not a surprise anymore.\n\nOn Thu, Feb 3, 2022 at 9:08 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Feb 3, 2022 at 8:35 PM Andres Freund <andres@anarazel.de> wrote:\n> > Yea, I'd have no problem leaving the \"hard\" limit somewhere closer to 1\n> > million (although 100k should be just as well), but introduce a softer \"only\n> > vacuum/drop/truncate\" limit a good bit before that.\n>\n> +1.\n\nSince there seems to be agreement on this, I can attempt a stab at it,\nbut it'll be another week before I can do so.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 4 Feb 2022 17:56:45 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Fri, Feb 4, 2022 at 4:58 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> As I said\n> before, I know TRUNCATE has been an issue in the past, and if that's\n> not already fixed in v14, we should. If there's other stuff, we should\n> fix that too.\n\nThe failsafe mode does disable truncation as of v14:\n\ncommit 60f1f09ff44308667ef6c72fbafd68235e55ae27\nAuthor: Peter Geoghegan <pg@bowt.ie>\nDate: Tue Apr 13 12:58:31 2021 -0700\n\nDon't truncate heap when VACUUM's failsafe is in effect.\n--\n\nTo demonstrate to myself, I tried a few vacuums in a debugger session\nwith a breakpoint at GetNewTransactionId(). 
I've only seen it reach\nhere when heap truncation happens (or FULL and ANALYZE, which are not\nrelevant in wraparound situations).\n\nWith the maximum allowable setting of autovacuum_freeze_max_age of 2\nbillion, the highest allowable vacuum_failsafe_age is 2.1 billion, so\nheap truncation will be shut off before the warnings start.\n\n> And then we should KILL WITH FIRE\n> the message telling people to use single user mode -- and once we do\n> that, the question of what the behavior ought to be when someone does\n> run VACUUM in single user mode becomes a lot less important.\n\nOkay, so it sounds like changing the message is enough for v15? The\nother two things mentioned are nice-to-haves, but wouldn't need to\nhold back this minimal change, it seems:\n\nOn Fri, Feb 4, 2022 at 4:50 AM Andres Freund <andres@anarazel.de> wrote:\n\n> I wonder if we shouldn't add some exceptions to the xid allocation\n> prevention. It makes sense that we don't allow random DML. But it's e.g. often\n> more realistic to drop / truncate a few tables with unimportant content,\n> rather than spend the time vacuuming those. We could e.g. allow xid\n> consumption within VACUUM, TRUNCATE, DROP TABLE / INDEX when run at the top\n> level for longer than we allow it for anything else.\n\nIt seems like this would require having access to \"nodetag(parsetree)\"\nof the statement available in GetNewTransactionId. 
I don't immediately\nsee an easy way to do that...is a global var within the realm of\nacceptability?\n\nOn Fri, Feb 4, 2022 at 8:35 AM Andres Freund <andres@anarazel.de> wrote:\n\n> we'd actually tell the user a bit more what about what is causing the\n> problem.\n>\n> We can compute the:\n> 1) oldest slot by xmin, with name\n> 2) oldest walsender by xmin, with pid\n> 3) oldest prepared transaction id by xid / xmin, with name\n> 4) oldest in-progress transaction id by xid / xmin, with name\n> 5) oldest database datfrozenxid, with database name\n[...]\n> Also, adding an SRF providing the above in a useful format would be great for\n> monitoring and for \"remote debugging\" of problems.\n\nI concur it sounds very useful, and not terribly hard, but probably a\nv16 project.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Feb 2022 11:04:46 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Mon, Feb 14, 2022 at 8:04 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> The failsafe mode does disable truncation as of v14:\n>\n> commit 60f1f09ff44308667ef6c72fbafd68235e55ae27\n> Author: Peter Geoghegan <pg@bowt.ie>\n> Date: Tue Apr 13 12:58:31 2021 -0700\n>\n> Don't truncate heap when VACUUM's failsafe is in effect.\n\nThat's true, but bear in mind that it only does so when the specific\ntable being vacuumed actually triggers the failsafe. I believe that\nVACUUM(EMERGENCY) doesn't just limit itself to vacuuming tables where\nthis is guaranteed (or even likely). 
If I'm not mistaken, it's\npossible (even likely) that there will be a table whose\nage(relfrozenxid) is high enough for VACUUM(EMERGENCY) to target the\ntable, and yet not so high that the failsafe will kick in at the\nearliest opportunity.\n\n> To demonstrate to myself, I tried a few vacuums in a debugger session\n> with a breakpoint at GetNewTransactionId(). I've only seen it reach\n> here when heap truncation happens (or the not relevant for wraparound\n> situations FULL and ANALYZE).\n\nIt's possible for a manually issued VACUUM to directly disable\ntruncation (same with index_cleanup). Without getting into the\nquestion of what the ideal behavior might be right now, I can say for\nsure that it wouldn't be difficult to teach VACUUM(EMERGENCY) to pass\ndown the same options.\n\nThe failsafe is essentially a mechanism that dynamically changes these\noptions for an ongoing vacuum, once age(relfrozenxid) crosses a\ncertain threshold. There is nothing fundamentally special about that.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 14 Feb 2022 20:21:41 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Tue, Feb 15, 2022 at 11:22 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Feb 14, 2022 at 8:04 PM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n> > The failsafe mode does disable truncation as of v14:\n> >\n> > commit 60f1f09ff44308667ef6c72fbafd68235e55ae27\n> > Author: Peter Geoghegan <pg@bowt.ie>\n> > Date: Tue Apr 13 12:58:31 2021 -0700\n> >\n> > Don't truncate heap when VACUUM's failsafe is in effect.\n>\n> That's true, but bear in mind that it only does so when the specific\n> table being vacuumed actually triggers the failsafe. I believe that\n> VACUUM(EMERGENCY) doesn't just limit itself to vacuuming tables where\n> this is guaranteed (or even likely). 
If I'm not mistaken, it's\n> possible (even likely) that there will be a table whose\n> age(relfrozenxid) is high enough for VACUUM(EMERGENCY) to target the\n> table, and yet not so high that the failsafe will kick in at the\n> earliest opportunity.\n\nWell, the point of inventing this new vacuum mode was because I\nthought that upon reaching xidStopLimit, we couldn't issue commands,\nperiod, under the postmaster. If it was easier to get a test instance\nto xidStopLimit, I certainly would have discovered this sooner. When\nAndres wondered about getting away from single user mode, I assumed\nthat would involve getting into areas too deep to tackle for v15. As\nRobert pointed out, lazy_truncate_heap is the only thing that can't\nhappen for vacuum at this point, and fully explains why in versions <\n14 our client's attempts to vacuum resulted in error. Since the\nfailsafe mode turns off truncation, vacuum should now *just work* near\nwraparound. If there is any doubt, we can tighten the check for\nentering failsafe.\n\nNow, it's certainly possible that autovacuum is either not working at\nall because of something broken, or is not working on the oldest\ntables at the moment, so one thing we could do is to make VACUUM [with\nno tables listed] get the tables from pg_class in reverse order of\nmax(xid age, mxid age). That way, the horizon will eventually pull\nback over time and the admin can optionally cancel the vacuum at some\npoint. Since the order is harmless when it's not needed, we can do\nthat unconditionally.\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Feb 2022 13:04:42 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" 
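The table ordering John proposes here is easy to sketch. The following is illustrative Python, not actual VACUUM code, and the tuple shape is an assumption: sort candidate tables by the larger of their XID age and MXID age, descending, so the tables holding back datfrozenxid the most are processed first and the run can be cancelled once there is enough headroom again:

```python
# Sketch of the proposed table ordering for a bare VACUUM near wraparound:
# oldest-first by max(xid age, mxid age), so the wraparound horizon starts
# retreating as early as possible.

def emergency_vacuum_order(tables):
    """tables: iterable of (name, xid_age, mxid_age) tuples.
    Returns table names, most-urgent first."""
    return [name for name, xid_age, mxid_age
            in sorted(tables, key=lambda t: max(t[1], t[2]), reverse=True)]
```

As noted above, this ordering is harmless when it is not needed, which is why it could be applied unconditionally.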
}, { "msg_contents": "On Tue, Feb 15, 2022 at 1:04 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> Well, the point of inventing this new vacuum mode was because I\n> thought that upon reaching xidStopLimit, we couldn't issue commands,\n> period, under the postmaster. If it was easier to get a test instance\n> to xidStopLimit, I certainly would have discovered this sooner. When\n> Andres wondered about getting away from single user mode, I assumed\n> that would involve getting into areas too deep to tackle for v15. As\n> Robert pointed out, lazy_truncate_heap is the only thing that can't\n> happen for vacuum at this point, and fully explains why in versions <\n> 14 our client's attempts to vacuum resulted in error. Since the\n> failsafe mode turns off truncation, vacuum should now *just work* near\n> wraparound. If there is any doubt, we can tighten the check for\n> entering failsafe.\n\n+1 to all of that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Feb 2022 09:16:06 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Mon, Feb 14, 2022 at 10:04 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> Well, the point of inventing this new vacuum mode was because I\n> thought that upon reaching xidStopLimit, we couldn't issue commands,\n> period, under the postmaster. If it was easier to get a test instance\n> to xidStopLimit, I certainly would have discovered this sooner.\n\nI did notice from my own testing of the failsafe (by artificially\ninducing wraparound failure using an XID burning C function) that\nautovacuum seemed to totally correct the problem, even when the system\nhad already crossed xidStopLimit - it came back on its own. 
I wasn't\ncompletely sure of how robust this effect was, though.\n\n> When\n> Andres wondered about getting away from single user mode, I assumed\n> that would involve getting into areas too deep to tackle for v15. As\n> Robert pointed out, lazy_truncate_heap is the only thing that can't\n> happen for vacuum at this point, and fully explains why in versions <\n> 14 our client's attempts to vacuum resulted in error. Since the\n> failsafe mode turns off truncation, vacuum should now *just work* near\n> wraparound. If there is any doubt, we can tighten the check for\n> entering failsafe.\n\nObviously having to enter single user mode is horrid. If we can\nreasonably update the advice to something more reasonable now, then\nthat would help users that find themselves in this situation a great\ndeal.\n\n> Now, it's certainly possible that autovacuum is either not working at\n> all because of something broken, or is not working on the oldest\n> tables at the moment, so one thing we could do is to make VACUUM [with\n> no tables listed] get the tables from pg_class in reverse order of\n> max(xid age, mxid age). That way, the horizon will eventually pull\n> back over time and the admin can optionally cancel the vacuum at some\n> point. Since the order is harmless when it's not needed, we can do\n> that unconditionally.\n\nMy ongoing work on freezing/relfrozenxid tends to make the age of\nrelfrozenxid much more indicative of the amount of work that VACUUM\nwould have to do when run -- not limited to freezing. You could\nprobably do this anyway, but it's nice that that'll be true.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 15 Feb 2022 09:28:47 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" 
}, { "msg_contents": "On Tue, Feb 15, 2022 at 9:28 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Mon, Feb 14, 2022 at 10:04 PM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n> > Well, the point of inventing this new vacuum mode was because I\n> > thought that upon reaching xidStopLimit, we couldn't issue commands,\n> > period, under the postmaster. If it was easier to get a test instance\n> > to xidStopLimit, I certainly would have discovered this sooner.\n>\n> I did notice from my own testing of the failsafe (by artificially\n> inducing wraparound failure using an XID burning C function) that\n> autovacuum seemed to totally correct the problem, even when the system\n> had already crossed xidStopLimit - it came back on its own. I wasn't\n> completely sure of how robust this effect was, though.\n\nIt seemed worth noting this in comments above\nshould_attempt_truncation(). Pushed a commit to do that just now.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 15 Feb 2022 15:17:25 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Feb 16, 2022 at 6:17 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Feb 15, 2022 at 9:28 AM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> > I did notice from my own testing of the failsafe (by artificially\n> > inducing wraparound failure using an XID burning C function) that\n> > autovacuum seemed to totally correct the problem, even when the system\n> > had already crossed xidStopLimit - it came back on its own. I wasn't\n> > completely sure of how robust this effect was, though.\n\nI'll put some effort in finding any way that it might not be robust.\nAfter that, changing the message and docs is trivial.\n\n> It seemed worth noting this in comments above\n> should_attempt_truncation(). 
Pushed a commit to do that just now.\n\nThanks for that.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Feb 2022 15:43:12 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Feb 16, 2022 at 12:43 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> I'll put some effort in finding any way that it might not be robust.\n> After that, changing the message and docs is trivial.\n\nGreat, thanks John.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 16 Feb 2022 08:12:53 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Feb 16, 2022 at 2:29 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Feb 14, 2022 at 10:04 PM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n> > Well, the point of inventing this new vacuum mode was because I\n> > thought that upon reaching xidStopLimit, we couldn't issue commands,\n> > period, under the postmaster. If it was easier to get a test instance\n> > to xidStopLimit, I certainly would have discovered this sooner.\n>\n> I did notice from my own testing of the failsafe (by artificially\n> inducing wraparound failure using an XID burning C function) that\n> autovacuum seemed to totally correct the problem, even when the system\n> had already crossed xidStopLimit - it came back on its own. I wasn't\n> completely sure of how robust this effect was, though.\n\nFYI, I've tested the situation that I assumed autovacuum can not\ncorrect the problem; when the system had already crossed xidStopLimit,\nit keeps failing to vacuum on tables that appear in the front of the\nlist and have sufficient garbage to trigger the truncation but are not\nolder than the failsafe limit. 
But contrary to my assumption, it did\ncorrect the problem since autovacuum continues to the next table in\nthe list even after an error. This probably means that autovacuum\neventually succeeds in processing all tables that trigger the failsafe\nmode, ensuring that datfrozenxid advances, which is great.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 17 Feb 2022 01:47:31 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Feb 16, 2022 at 8:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> FYI, I've tested the situation that I assumed autovacuum can not\n> correct the problem; when the system had already crossed xidStopLimit,\n> it keeps failing to vacuum on tables that appear in the front of the\n> list and have sufficient garbage to trigger the truncation but are not\n> older than the failsafe limit. But contrary to my assumption, it did\n> correct the problem since autovacuum continues to the next table in\n> the list even after an error. This probably means that autovacuum\n> eventually succeeds in processing all tables that trigger the failsafe\n> mode, ensuring that datfrozenxid advances, which is great.\n\nRight; it seems as if the situation is much improved, even when the\nfailsafe didn't prevent the system from going over xidStopLimit. If\nautovacuum alone can bring the system back to a normal state as soon\nas possible, without a human needing to do anything special, then\nclearly the general risk is much smaller. Even this worst case\nscenario where \"the failsafe has failed\" is not so bad anymore, in\npractice. 
I don't think that it really matters if some concurrent\nnon-emergency VACUUMs fail when attempting to truncate the table (it's\nno worse than ANALYZE failing, for example).\n\nGood news!\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 16 Feb 2022 09:50:41 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Feb 16, 2022 at 12:51 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Good news!\n\n+1. But I think we might want to try to write documentation around\nthis. We should explicitly tell people NOT to use single-user mode,\nbecause that stupid message has been there for a long time and a lot\nof people have probably internalized it by now. And we should also\ntell them that they SHOULD check for prepared transactions, old\nreplication slots, etc.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Feb 2022 12:56:42 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Feb 16, 2022 at 9:56 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> +1. But I think we might want to try to write documentation around\n> this. We should explicitly tell people NOT to use single-user mode,\n> because that stupid message has been there for a long time and a lot\n> of people have probably internalized it by now. And we should also\n> tell them that they SHOULD check for prepared transactions, old\n> replication slots, etc.\n\nAbsolutely -- couldn't agree more. Do you think it's worth targeting\n14 here, or just HEAD?\n\nI'm pretty sure that some people believe that wraparound can cause\nactual data corruption, in part because of the way the docs present\nthe information. The system won't do that, of course (precisely\nbecause of this xidStopLimit behavior). 
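To make that concrete, the stop-limit arithmetic in the circular 32-bit XID space can be sketched like this (the constants and helper names below are illustrative assumptions, not the server's actual code):

```python
# Sketch of the circular 32-bit XID arithmetic (illustrative, not server code).
# XIDs compare modulo 2**32: 'a precedes b' iff (a - b) is negative when read
# as a signed 32-bit value, so at most ~2**31 XIDs can lie 'in the future'.

XID_MODULO = 2**32
STOP_MARGIN = 3_000_000  # assumed safety margin before the hard stop

def xid_precedes(a: int, b: int) -> bool:
    # True if XID a logically precedes XID b in the circular XID space.
    return (a - b) % XID_MODULO >= 2**31

def stop_limit(oldest_frozen_xid: int) -> int:
    # XID at which new-XID allocation would halt (sketch of the margin).
    wrap_limit = (oldest_frozen_xid + 2**31) % XID_MODULO
    return (wrap_limit - STOP_MARGIN) % XID_MODULO

# Once the next XID reaches the stop limit, allocation halts, but reads that
# need no new XID keep working, which is why the system stays queryable.
oldest = 1000
limit = stop_limit(oldest)
assert xid_precedes(oldest, limit)       # the limit sits almost 2**31 ahead
assert not xid_precedes(limit, oldest)
```

The real limits live in shared memory and are recomputed as the oldest datfrozenxid advances; the point of the sketch is just that XID allocation halts with a wide safety margin, long before any XID could be reused.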
The docs make it all sound\nabsolutely terrifying, which doesn't seem proportionate to me (at\nleast not with this stuff in place, maybe not ever).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 16 Feb 2022 10:14:19 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "Hi,\n\nOn 2022-02-16 10:14:19 -0800, Peter Geoghegan wrote:\n> Absolutely -- couldn't agree more. Do you think it's worth targeting\n> 14 here, or just HEAD?\n\nI'd go for HEAD first, but wouldn't protest against 14.\n\n\n> I'm pretty sure that some people believe that wraparound can cause\n> actual data corruption\n\nWell, historically they're not wrong. And we've enough things stored in 32bit\ncounters that I'd be surprised if we didn't have more wraparound issues. Of\ncourse that's not related to anti-wrap vacuums...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 16 Feb 2022 10:18:46 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Feb 16, 2022 at 10:18 AM Andres Freund <andres@anarazel.de> wrote:\n> > I'm pretty sure that some people believe that wraparound can cause\n> > actual data corruption\n>\n> Well, historically they're not wrong.\n\nTrue, but the most recent version where that's actually possible is\nPostgreSQL 8.0, which was released in early 2005. That was a very\ndifferent time for the project. I don't think that people believe that\nwraparound can cause data corruption because they remember a time when\nit really could. It seems like general confusion to me (which could\nhave been avoided).\n\nAt a minimum, we ought to be very clear on the fact that Postgres\nisn't going to just let your database become corrupt in some more or\nless predictable way. 
The xidStopLimit thing is pretty bad, but it's\nstill much better than that.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 16 Feb 2022 10:27:38 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Feb 16, 2022 at 10:27 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> True, but the most recent version where that's actually possible is\n> PostgreSQL 8.0, which was released in early 2005.\n\nIt just occurred to me that the main historic reason for the single\nuser mode advice was the lack of virtual XIDs. The commit that added\nthe xidStopLimit behavior (commit 60b2444cc3) came a couple of years\nbefore the introduction of virtual transaction IDs (in commit\n295e63983d). AFAICT, the advice about single-user mode was added at a\ntime where exceeding xidStopLimit caused the system to grind to a halt\ncompletely -- even trivial SELECTs would have failed once Postgres 8.1\ncrossed the xidStopLimit limit.\n\nIt seems as if the advice about single user mode persisted for no\ngreat reason at all. Technically there were some remaining reasons to\nkeep it around (like the truncation thing), but overall these\nsecondary reasons could have been addressed much sooner if somebody\nhad thought about it.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 16 Feb 2022 11:18:19 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Feb 16, 2022 at 1:28 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Wed, Feb 16, 2022 at 10:18 AM Andres Freund <andres@anarazel.de> wrote:\n> > > I'm pretty sure that some people believe that wraparound can cause\n> > > actual data corruption\n> >\n> > Well, historically they're not wrong.\n>\n> True, but the most recent version where that's actually possible is\n> PostgreSQL 8.0, which was released in early 2005. 
That was a very\n> different time for the project. I don't think that people believe that\n> wraparound can cause data corruption because they remember a time when\n> it really could. It seems like general confusion to me (which could\n> have been avoided).\n\nNo, I think it's PostgreSQL 13, because before the vacuum failsafe\nthing you could end up truncating enough tables during vacuum\noperations to actually wrap around.\n\nAnd even in 14+, you can still do that, if you use single user mode.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Feb 2022 15:11:16 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Feb 16, 2022 at 2:18 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> It seems as if the advice about single user mode persisted for no\n> great reason at all. Technically there were some remaining reasons to\n> keep it around (like the truncation thing), but overall these\n> secondary reasons could have been addressed much sooner if somebody\n> had thought about it.\n\nI raised it on the list a couple of years ago, actually. I think I had\na bit of difficulty convincing people that it wasn't something we had\nto keep recommending.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Feb 2022 15:11:56 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Feb 16, 2022 at 12:11 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> No, I think it's PostgreSQL 13, because before the vacuum failsafe\n> thing you could end up truncating enough tables during vacuum\n> operations to actually wrap around.\n\nWhy wouldn't the xidStopLimit thing prevent actual incorrect answers\nto queries, even on Postgres 13? 
Why wouldn't that be enough, even if\nwe make the most pessimistic possible assumptions?\n\nTo me it looks like it's physically impossible to advance an XID past\nxidStopLimit, unless you're in single user mode. Does your concern\nhave something to do with the actual xidStopLimit value in shared\nmemory not being sufficiently protective in practice?\n\n> And even in 14+, you can still do that, if you use single user mode.\n\nSo what you're saying is that there is *some* reason for vacuuming in\nsingle user mode after all, and so we should keep the advice about\nthat in place? :-)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 16 Feb 2022 12:21:31 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Feb 16, 2022 at 3:21 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Wed, Feb 16, 2022 at 12:11 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > No, I think it's PostgreSQL 13, because before the vacuum failsafe\n> > thing you could end up truncating enough tables during vacuum\n> > operations to actually wrap around.\n>\n> Why wouldn't the xidStopLimit thing prevent actual incorrect answers\n> to queries, even on Postgres 13? Why wouldn't that be enough, even if\n> we make the most pessimistic possible assumptions?\n>\n> To me it looks like it's physically impossible to advance an XID past\n> xidStopLimit, unless you're in single user mode. Does your concern\n> have something to do with the actual xidStopLimit value in shared\n> memory not being sufficiently protective in practice?\n\nNo, what I'm saying is that people running older versions routinely\nrun VACUUM in single-user mode because otherwise it fails due to the\ntruncation issue. 
But once they go into single-user mode they lose\nprotection.\n\n> > And even in 14+, you can still do that, if you use single user mode.\n>\n> So what you're saying is that there is *some* reason for vacuuming in\n> single user mode after all, and so we should keep the advice about\n> that in place? :-)\n\nWe could perhaps amend the text slightly, e.g. \"This is a great idea\nif you like pain.\"\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Feb 2022 16:04:39 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Feb 16, 2022 at 1:04 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> No, what I'm saying is that people running older versions routinely\n> run VACUUM in single-user mode because otherwise it fails due to the\n> truncation issue. But once they go into single-user mode they lose\n> protection.\n\nSeems logically consistent, but absurd. A Catch-22 situation if ever\nthere was one.\n\nThere might well be an element of survivorship bias here. Most VACUUM\noperations won't ever attempt truncation (speaking very generally).\nHow many times might (say) the customer that John mentioned have\naccidentally gone over xidStopLimit for just a little while, before\nthe situation corrected itself without anybody noticing? A lot of\napplications are very read-heavy, or aren't very well monitored.\n\nEventually (maybe after several years of this), some laggard\nanti-wraparound vacuum needs to truncate the relation, due to random\nhappenstance. Once that happens, the situation is bound to come to a\nhead. 
The user is bound to finally notice that the system has gone\nover xidStopLimit, because there is no longer any way for the problem\nto go away on its own.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 16 Feb 2022 13:47:39 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Feb 16, 2022 at 4:48 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> There might well be an element of survivorship bias here. Most VACUUM\n> operations won't ever attempt truncation (speaking very generally).\n> How many times might (say) the customer that John mentioned have\n> accidentally gone over xidStopLimit for just a little while, before\n> the situation corrected itself without anybody noticing? A lot of\n> applications are very read-heavy, or aren't very well monitored.\n>\n> Eventually (maybe after several years of this), some laggard\n> anti-wraparound vacuum needs to truncate the relation, due to random\n> happenstance. Once that happens, the situation is bound to come to a\n> head. The user is bound to finally notice that the system has gone\n> over xidStopLimit, because there is no longer any way for the problem\n> to go away on its own.\n\nI think that's not really what is happening, at least not in the cases\nthat typically are brought to my attention. In those cases, the\ntypical pattern is:\n\n1. Everything is fine.\n\n2. Then the user forgets about a prepared transaction or a replication\nslot, or leaves a transaction open forever, or has some kind of\ncorruption that causes VACUUM to fall over and die every time it tries\nto run.\n\n3. The user has no idea that VACUUM is no longer advanced\nrelfrozenxid. Time passes.\n\n4. Eventually the system stops being willing to allocate new XIDs. It\ntells the user to go to single user mode. So they do.\n\n5. None of the tables in the database have been vacuumed in a long\ntime. There are a million XIDs left. 
How many of the tables in the\ndatabase are going to be truncated when they are vacuumed and burn one\nof the remaining XIDs? Anybody's guess, could be all or none.\n\n6. Sometimes the user decides to run VACUUM FULL instead of plain\nVACUUM because it sounds better.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Feb 2022 21:56:26 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Feb 16, 2022 at 6:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think that's not really what is happening, at least not in the cases\n> that typically are brought to my attention. In those cases, the\n> typical pattern is:\n\n> 5. None of the tables in the database have been vacuumed in a long\n> time. There are a million XIDs left. How many of the tables in the\n> database are going to be truncated when they are vacuumed and burn one\n> of the remaining XIDs? Anybody's guess, could be all or none.\n\nI have to admit that this sounds way more plausible than my\nspeculative scenario. I haven't been involved in any kind of support\ncase with a customer in a *long* time, though (not by choice, mind\nyou).\n\n> 6. Sometimes the user decides to run VACUUM FULL instead of plain\n> VACUUM because it sounds better.\n\nIt's a pity that the name suggests otherwise. If only we'd named it\nsomething that suggests \"option of last resort\". Oh well.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 16 Feb 2022 19:07:50 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Feb 16, 2022 at 10:08 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > 6. Sometimes the user decides to run VACUUM FULL instead of plain\n> > VACUUM because it sounds better.\n>\n> It's a pity that the name suggests otherwise. 
If only we'd named it\n> something that suggests \"option of last resort\". Oh well.\n\nUnfortunately, such a name would also be misleading, just in a\ndifferent way. It is really not at all difficult to have a workload\nthat demands routine use of VACUUM FULL. I suppose technically it is a\nlast resort in such situations, because what would you resort to after\ntrying VF? But it's not like some kind of in-emergency-break-glass\nkind of thing, it's just the right tool for the job.\n\nSome years ago I worked with a customer who had a table that was being\nused as an update-heavy queue. I don't remember all the details any\nmore, but I think the general pattern was that they would insert rows,\nupdate them A TON, and then eventually delete them. And they got\nreally bad table bloat, because vacuum just wasn't running often\nenough to keep up. Reducing autovacuum_naptime to 15s fixed the issue,\nfortunately, but I was initially thinking that it might be completely\nunfixable, because what if they'd also been running a series of\n4-minute-long reporting queries in a loop on some other table? More\nfrequent vacuuming wouldn't have helped then, because xmin would not\nhave been able to advance until the current instance of the reporting\nquery finished, and then vacuuming more often would have done nothing\nuseful. I think, anyway.\n\nThat's just one example that comes to mind. I think there are lots of\nworkloads where it's simply not possible to make VACUUM keep up.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Feb 2022 09:51:25 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" 
}, { "msg_contents": "On Wed, Feb 16, 2022 at 03:43:12PM +0700, John Naylor wrote:\n> On Wed, Feb 16, 2022 at 6:17 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > On Tue, Feb 15, 2022 at 9:28 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> > > I did notice from my own testing of the failsafe (by artificially\n> > > inducing wraparound failure using an XID burning C function) that\n> > > autovacuum seemed to totally correct the problem, even when the system\n> > > had already crossed xidStopLimit - it came back on its own. I wasn't\n> > > completely sure of how robust this effect was, though.\n> \n> I'll put some effort in finding any way that it might not be robust.\n\nA VACUUM may create a not-trivially-bounded number of multixacts via\nFreezeMultiXactId(). In a cluster at multiStopLimit, completing VACUUM\nwithout error needs preparation something like:\n\n1. Kill each XID that might appear in a multixact.\n2. Resolve each prepared transaction that might appear in a multixact.\n3. Run VACUUM. At this point, multiStopLimit is blocking new multixacts from\n other commands, and the lack of running multixact members removes the need\n for FreezeMultiXactId() to create multixacts.\n\nAdding to the badness of single-user mode so well described upthread, one can\nenter it without doing (2) and then wrap the nextMXact counter.\n\n\n", "msg_date": "Sat, 19 Feb 2022 20:57:57 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" 
}, { "msg_contents": "Hi,\n\nOn 2022-02-19 20:57:57 -0800, Noah Misch wrote:\n> On Wed, Feb 16, 2022 at 03:43:12PM +0700, John Naylor wrote:\n> > On Wed, Feb 16, 2022 at 6:17 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > On Tue, Feb 15, 2022 at 9:28 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > \n> > > > I did notice from my own testing of the failsafe (by artificially\n> > > > inducing wraparound failure using an XID burning C function) that\n> > > > autovacuum seemed to totally correct the problem, even when the system\n> > > > had already crossed xidStopLimit - it came back on its own. I wasn't\n> > > > completely sure of how robust this effect was, though.\n> > \n> > I'll put some effort in finding any way that it might not be robust.\n> \n> A VACUUM may create a not-trivially-bounded number of multixacts via\n> FreezeMultiXactId(). In a cluster at multiStopLimit, completing VACUUM\n> without error needs preparation something like:\n> \n> 1. Kill each XID that might appear in a multixact.\n> 2. Resolve each prepared transaction that might appear in a multixact.\n> 3. Run VACUUM. At this point, multiStopLimit is blocking new multixacts from\n> other commands, and the lack of running multixact members removes the need\n> for FreezeMultiXactId() to create multixacts.\n> \n> Adding to the badness of single-user mode so well described upthread, one can\n> enter it without doing (2) and then wrap the nextMXact counter.\n\nIf we collected the information along the lines of I proposed in the second half of\nhttps://www.postgresql.org/message-id/20220204013539.qdegpqzvayq3d4y2%40alap3.anarazel.de\nwe should be able to handle such cases more intelligently, I think?\n\nWe could e.g. add an error if FreezeMultiXactId() needs to create a new\nmultixact for a far-in-the-past xid. That's not great, of course, but if we\ninclude the precise cause (pid of backend / prepared xact name / slot name /\n...) 
necessitating creating a new multi, it'd still be a significant\nimprovement over the status quo.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 20 Feb 2022 14:15:37 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Sun, Feb 20, 2022 at 02:15:37PM -0800, Andres Freund wrote:\n> On 2022-02-19 20:57:57 -0800, Noah Misch wrote:\n> > On Wed, Feb 16, 2022 at 03:43:12PM +0700, John Naylor wrote:\n> > > On Wed, Feb 16, 2022 at 6:17 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > > On Tue, Feb 15, 2022 at 9:28 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > \n> > > > > I did notice from my own testing of the failsafe (by artificially\n> > > > > inducing wraparound failure using an XID burning C function) that\n> > > > > autovacuum seemed to totally correct the problem, even when the system\n> > > > > had already crossed xidStopLimit - it came back on its own. I wasn't\n> > > > > completely sure of how robust this effect was, though.\n> > > \n> > > I'll put some effort in finding any way that it might not be robust.\n> > \n> > A VACUUM may create a not-trivially-bounded number of multixacts via\n> > FreezeMultiXactId(). In a cluster at multiStopLimit, completing VACUUM\n> > without error needs preparation something like:\n> > \n> > 1. Kill each XID that might appear in a multixact.\n> > 2. Resolve each prepared transaction that might appear in a multixact.\n> > 3. Run VACUUM. 
At this point, multiStopLimit is blocking new multixacts from\n> > other commands, and the lack of running multixact members removes the need\n> > for FreezeMultiXactId() to create multixacts.\n> > \n> > Adding to the badness of single-user mode so well described upthread, one can\n> > enter it without doing (2) and then wrap the nextMXact counter.\n> \n> If we collected the information along the lines of I proposed in the second half of\n> https://www.postgresql.org/message-id/20220204013539.qdegpqzvayq3d4y2%40alap3.anarazel.de\n> we should be able to handle such cases more intelligently, I think?\n> \n> We could e.g. add an error if FreezeMultiXactId() needs to create a new\n> multixact for a far-in-the-past xid. That's not great, of course, but if we\n> include the precise cause (pid of backend / prepared xact name / slot name /\n> ...) necessitating creating a new multi, it'd still be a significant\n> improvement over the status quo.\n\nYes, exactly.\n\n\n", "msg_date": "Sun, 20 Feb 2022 15:08:10 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Sun, Feb 20, 2022 at 2:15 PM Andres Freund <andres@anarazel.de> wrote:\n> We could e.g. add an error if FreezeMultiXactId() needs to create a new\n> multixact for a far-in-the-past xid. That's not great, of course, but if we\n> include the precise cause (pid of backend / prepared xact name / slot name /\n> ...) necessitating creating a new multi, it'd still be a significant\n> improvement over the status quo.\n\nThere are databases that have large tables (that grow and grow), and\nalso have tables that need to allocate many MultiXacts (or lots of\nmember space, at least). I strongly suspect that these are seldom the\nsame table, though.\n\nThe current inability of the system to recognize this difference seems\nlike it might be a real problem. 
Why should big tables that contain no\nactual MultiXactIds at all (and never contained even one) get early\nanti-wraparound VACUUMs, specifically focussed on averting MultiXact\nwraparound? I'm hoping that the patch that adds smarter tracking of\nfinal relfrozenxid/relminmxid values during VACUUM makes this less of\na problem automatically.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 20 Feb 2022 15:12:36 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Feb 16, 2022 at 12:43 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> I'll put some effort in finding any way that it might not be robust.\n> After that, changing the message and docs is trivial.\n\nIt would be great to be able to totally drop the idea of using\nsingle-user mode before Postgres 15 feature freeze. How's that going?\n\nI suggest that we apply the following patch as part of that work. It\nadds one last final failsafe check at the point that VACUUM makes a\nfinal decision on rel truncation.\n\nIt seems unlikely that the patch will ever make the crucial difference\nin a wraparound scenario -- in practice it's very likely that we'd\nhave triggered the wraparound at that point if we run into trouble\nwith the target rel's relfrozenxid age. And even if it does get to\nthat point, it would still be possible for the autovacuum launcher to\nlaunch another autovacuum -- this time around we will avoid rel\ntruncation, restoring the system to normal operation (i.e. no more\nxidStopLimit state).\n\nOn the other hand it's possible that lazy_cleanup_all_indexes() will\ntake a very long time to run, and it runs after the current final\nfailsafe check. An index AM's amvacuumcleanup() routine can take a\nlong time to run sometimes, especially with GIN indexes. 
And so it's\njust about possible that we won't have triggered the failsafe by the\ntime lazy_cleanup_all_indexes() is called, which then spends a long\ntime doing index cleanup -- long enough for the system to reach\nxidStopLimit due to the target rel's relfrozenxid age crossing the\ncrucial xidStopLimit crossover point.\n\nThis patch makes this problem scenario virtually impossible. Right now\nI'm only prepared to say it's very unlikely. I don't see a reason to\ntake any chances, though.\n\n-- \nPeter Geoghegan", "msg_date": "Tue, 15 Mar 2022 14:48:24 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Wed, Mar 16, 2022 at 4:48 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Feb 16, 2022 at 12:43 AM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n> > I'll put some effort in finding any way that it might not be robust.\n> > After that, changing the message and docs is trivial.\n>\n> It would be great to be able to totally drop the idea of using\n> single-user mode before Postgres 15 feature freeze. How's that going?\n\nUnfortunately, I was distracted from this work for a time, and just as\nI had intended to focus on it during March, I was out sick for 2-3\nweeks. I gather from subsequent discussion that a full solution goes\nbeyond just a new warning message and documentation. Either way I'm\nnot quite prepared to address this in time for v15.\n\n> I suggest that we apply the following patch as part of that work. 
It\n> adds one last final failsafe check at the point that VACUUM makes a\n> final decision on rel truncation.\n\nThat is one thing that was in the back of my mind, and it seems\nreasonable to me.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 31 Mar 2022 16:51:05 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Thu, Feb 03, 2022 at 01:05:50PM -0500, Robert Haas wrote:\n> On Thu, Dec 9, 2021 at 8:56 PM Andres Freund <andres@anarazel.de> wrote:\n> > I think we should move *away* from single user mode, rather than the\n> > opposite. It's a substantial code burden and it's hard to use.\n> \n> Yes. This thread seems to be largely devoted to the topic of making\n> single-user vacuum work better, but I don't see anyone asking the\n> question \"why do we have a message that tells people to vacuum in\n> single user mode in the first place?\". It's basically bad advice,\n\n> The correct thing to do would be to remove\n> the hint as bad advice that we never should have offered in the first\n> place. And so here. We should not try to make vacuum in single\n> user-mode work better or differently, or at least that shouldn't be\n> our primary objective. We should just stop telling people to do it. We\n> should probably add messages and documentation *discouraging* the use\n> of single user mode for recovering from wraparound trouble, exactly\n> the opposite of what we do now. There's nothing we can do in\n> single-user mode that we can't do equally well in multi-user mode. If\n> people try to fix wraparound problems in multi-user mode, they still\n> have read-only access to their database, they can use parallelism,\n> they can use command line utilities like vacuumdb, and they can use\n> psql which has line editing and allows remote access and is a way\n> nicer user experience than running postgres --single. 
We need a really\n> compelling reason to tell people to give up all those advantages, and\n> there is no such reason. It makes just as much sense as telling people\n> to deal with wraparound problems by angering a live anaconda.\n\nBy chance, I came across this prior thread which advocated the same thing\ninitially (rather than indirectly as in this year's thread).\n\nhttps://www.postgresql.org/message-id/flat/CAMT0RQTmRj_Egtmre6fbiMA9E2hM3BsLULiV8W00stwa3URvzA%40mail.gmail.com\n|We should stop telling users to \"vacuum that database in single-user mode\"\n\n\n", "msg_date": "Mon, 27 Jun 2022 14:36:09 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" }, { "msg_contents": "On Mon, Jun 27, 2022 at 12:36 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> By chance, I came across this prior thread which advocated the same thing\n> initially (rather than indirectly as in this year's thread).\n\nRevisiting this topic reminded me that PostgreSQL 14 (the first\nversion that had the wraparound failsafe mechanism controlled by\nvacuum_failsafe_age) has been a stable release for 9 months now. As of\ntoday I am still not aware of even one user that ran into the failsafe\nmechanism in production. It might well have happened by now, of\ncourse, but I am not aware of any specific case. Perhaps this will\nchange soon enough -- maybe somebody else will read this and enlighten\nme.\n\nTo me the fact that the failsafe seems to seldom kick in in practice\nsuggests something about workload characteristics in general: that it\nisn't all that common for users to try to get away with putting off\nfreezing until a table attains an age that is significantly above 1\nbillion XIDs.\n\nWhen people talk about things like 64-bit XIDs, I tend to wonder: if 2\nbillion XIDs wasn't enough, why should 4 billion or 8 billion be\nenough? 
*Maybe* the system can do better by getting even further into\ndebt than it can today, but you can't expect to avoid freezing\naltogether (without significant work elsewhere). My general sense is\nthat freezing isn't a particularly good thing to try to do lazily --\neven if we ignore the risk of an eventual wraparound failure.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 27 Jun 2022 13:36:36 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: do only critical work during single-user vacuum?" } ]
[ { "msg_contents": "Can we change the default setting of track_io_timing to on?\n\nI see a lot of questions, such as over at stackoverflow or\ndba.stackexchange.com, where people ask for help with plans that would be\nmuch more useful were this on. Maybe they just don't know better, maybe\nthey can't turn it on because they are not a superuser.\n\nI can't imagine a lot of people who care much about its performance impact\nwill be running the latest version of PostgreSQL on\nancient/weird systems that have slow clock access. (And the few who do can\njust turn it off for their system).\n\nFor systems with fast user-space clock access, I've never seen this setting\nbeing turned on make a noticeable dent in performance. Maybe I just never\ntested enough in the most adverse scenario (which I guess would be a huge\nFS cache, a small shared buffers, and a high CPU count with constant\nchurning of pages that hit the FS cache but miss shared buffers--not a\nsystem I have handy to do a lot of tests with.)\n\nCheers,\n\nJeff\n\n", "msg_date": "Thu, 9 Dec 2021 23:43:37 -0500", "msg_from": "Jeff Janes <jeff.janes@gmail.com>", "msg_from_op": true, "msg_subject": "track_io_timing default setting" }, { "msg_contents": "> Can we change the default setting of track_io_timing to on?\n\n+1 for better observability by default.\n\n> I can't imagine a lot of people who care much about its performance impact will be running the latest version of PostgreSQL on ancient/weird systems that have slow clock access. (And the few who do can just turn it off for their system).\n> For systems with fast user-space clock access, I've never seen this setting being turned on make a noticeable dent in performance.  Maybe I just never tested enough in the most adverse scenario (which I guess would be a huge FS cache, a small shared buffers, and a high CPU count with constant churning of pages that hit the FS cache but miss shared buffers--not a system I have handy to do a lot of tests with.)\n\nCoincidentally I have some quick notes for measuring the impact of changing the \"clocksource\" on Linux 5.10.x (real syscall vs vDSO optimization) on PgSQL 13.x as input to the discussion. The thing is that the slow \"xen\" implementation (at least on AWS i3, Amazon Linux 2) is the default because apparently time with the faster TSC/RDTSC ones can potentially drift backwards e.g. during potential(?) VM live migration. I haven't seen a better way to see what happens under the hood than strace and/or measuring a huge number of calls. This only shows of course the impact on the whole PgSQL (with track_io_timing=on), not just the impact between track_io_timing=on vs off. 
IMHO better knowledge (in explain analyze, autovacuum) is worth more than this potential degradation when using slow clocksources.\n\nWith /sys/bus/clocksource/devices/clocksource0/current_clocksource=xen (default on most AWS instances; ins.pgb = simple insert to table with PK only from sequencer.):\n# time ./testclock # 10e9 calls of gettimeofday()\nreal 0m58.999s\nuser 0m35.796s\nsys 0m23.204s\n\n//pgbench \n transaction type: ins.pgb\n scaling factor: 1\n query mode: simple\n number of clients: 8\n number of threads: 2\n duration: 100 s\n number of transactions actually processed: 5511485\n latency average = 0.137 ms\n latency stddev = 0.034 ms\n tps = 55114.743913 (including connections establishing)\n tps = 55115.999449 (excluding connections establishing)\n\nWith /sys/bus/clocksource/devices/clocksource0/current_clocksource=tsc :\n# time ./testclock # 10e9 calls of gettimeofday()\nreal 0m2.415s\nuser 0m2.415s\nsys 0m0.000s # XXX: notice, userland only workload, no %sys part\n\n//pgbench:\n transaction type: ins.pgb\n scaling factor: 1\n query mode: simple\n number of clients: 8\n number of threads: 2\n duration: 100 s\n number of transactions actually processed: 6190406\n latency average = 0.123 ms\n latency stddev = 0.035 ms\n tps = 61903.863938 (including connections establishing)\n tps = 61905.261175 (excluding connections establishing)\n\nIn addition, what could be done here - if that XXX note holds true on more platforms - is to measure many gettimeofday() calls via rusage() during startup and log a warning to consider checking the OS clock implementation if it takes relatively too long and/or the %sys part is > 0. I dunno what to suggest for the potential time going backwards, but changing track_io_timing=on doesn't feel like it is going to make stuff crash, so again I think it is a good idea. 
\n\n-Jakub Wartak.\n\n\n", "msg_date": "Fri, 10 Dec 2021 11:06:37 +0000", "msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>", "msg_from_op": false, "msg_subject": "RE: track_io_timing default setting" }, { "msg_contents": "Jeff Janes <jeff.janes@gmail.com> writes:\n> Can we change the default setting of track_io_timing to on?\n\nThat adds a very significant amount of overhead on some platforms\n(gettimeofday is not cheap if it requires a kernel call). And I\ndoubt the claim that the average Postgres user needs this, and\ndoubt even more that they need it on all the time.\nSo I'm -1 on the idea.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Dec 2021 10:20:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: track_io_timing default setting" }, { "msg_contents": "On Fri, 2021-12-10 at 10:20 -0500, Tom Lane wrote:\n> Jeff Janes <jeff.janes@gmail.com> writes:\n> > Can we change the default setting of track_io_timing to on?\n> \n> That adds a very significant amount of overhead on some platforms\n> (gettimeofday is not cheap if it requires a kernel call).  And I\n> doubt the claim that the average Postgres user needs this, and\n> doubt even more that they need it on all the time.\n> So I'm -1 on the idea.\n\nI set \"track_io_timing\" to \"on\" all the time, same as \"log_lock_waits\",\nso I'd want them both on by default.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Fri, 10 Dec 2021 17:22:51 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: track_io_timing default setting" }, { "msg_contents": "On 12/10/21 17:22, Laurenz Albe wrote:\n> On Fri, 2021-12-10 at 10:20 -0500, Tom Lane wrote:\n>> Jeff Janes <jeff.janes@gmail.com> writes:\n>>> Can we change the default setting of track_io_timing to on?\n>>\n>> That adds a very significant amount of overhead on some platforms\n>> (gettimeofday is not cheap if it requires a kernel call).  
And I\n>> doubt the claim that the average Postgres user needs this, and\n>> doubt even more that they need it on all the time.\n>> So I'm -1 on the idea.\n> \n> I set \"track_io_timing\" to \"on\" all the time, same as \"log_lock_waits\",\n> so I'd want them both on by default.\n> \n\nIMHO those options have very different overhead - log_lock_waits logs \nonly stuff that exceeds deadlock_timeout (1s by default), so the amount \nof gettimeofday() calls is miniscule compared to calling it for every \nI/O request.\n\nI wonder if we could simply do the thing we usually do when measuring \nexpensive stuff - measure just a small sample. That is, we wouldn't \nmeasure timing for every I/O request, but just a small fraction. For \ncases with a lot of I/O requests that should give pretty good image.\n\nThat's not a simple \"change GUC default\" patch, but it's not a very \ncomplicated patch either.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 10 Dec 2021 17:33:28 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: track_io_timing default setting" }, { "msg_contents": ">-----Original Message-----\n>Sent: Friday, December 10, 2021 9:20 AM\n>To: Jeff Janes <jeff.janes@gmail.com>\n>Cc: PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\n>Subject: [EXTERNAL] Re: track_io_timing default setting\n\n>Jeff Janes <jeff.janes@gmail.com> writes:\n> Can we change the default setting of track_io_timing to on?\n\n>That adds a very significant amount of overhead on some platforms (gettimeofday is not cheap if it requires a kernel call). And I doubt the claim that the average Postgres user needs this, and doubt even more >that they need it on all the time.\n>So I'm -1 on the idea.\n\n\t\t\tregards, tom lane\n>\n\nIn all honesty, the term \"significant amount of overhead on some platforms\" is ambiguous. 
Exactly how much overhead and on what platforms??? I would prefer the document to say something on the order of:\n\n\t\"Enables timing of database I/O calls. This parameter is historically off by default, because it will repeatedly query the operating system for the current time, which may increase overhead costs of \telapsed time for each IO. Platforms known to incur a problematic overhead are, <etc, etc, etc>. To measure the overhead of timing on your system, use the pg_test_timing tool. This overhead may \tbecome a performance issue when less than 90% of the tests execute for more than 1 microsecond (us). Please refer to the pg_test_timing tool page for more details\"\n\nI have the timing always turned on, but that doesn't necessarily mean the default should be changed. However the documentation should be changed as the current phrasing would probably discourage some folks from even trying. I ran the pg_test_timing tool and it came out to .000000023 seconds overhead. Since we typically measure IO in terms of milliseconds, this number is statistically insignificant.\n\nAs long as we're on the topic, the documentation for the pg_test_timing tool as well as the output of the tool have something to be desired. The tool output looks like this:\n\nTesting timing overhead for 3 seconds.\nPer loop time including overhead: 23.02 ns\nHistogram of timing durations:\n < us % of total count\n 1 97.70191 127332403\n 2 2.29729 2993997\n 4 0.00007 90\n 8 0.00069 904\n 16 0.00004 57\n\nTake note of the comment: \"Per loop time including overhead\" - so does that means the overhead IS LESS than the reported 23.02 ns? Is that an issue with the actual test code or the output prose? Furthermore the tool's doc goes on to things like this:\n\n\t\"The i7-860 system measured runs the count query in 9.8 ms while the EXPLAIN ANALYZE version takes 16.6 ms, each processing just over 100,000 rows. 
That 6.8 ms difference means the timing \toverhead per row is 68 ns, about twice what pg_test_timing estimated it would be. Even that relatively small amount of overhead is making the fully timed count statement take almost 70% longer. On \tmore substantial queries, the timing overhead would be less problematic.\"\n\nIMHO this is misleading. This timing process is what EXPLAIN ANALYZE does and most likely completely unrelated to the topic in question - that is turning on io timing! What this paragraph is implying through the reader's chain of events is that IF you turn on track_io_timing you may result in a 70% overhead!!! Umm - really???\n\nLong story short, I'm perfectly fine with this 'overhead' - unless someone wants to refute this.\nRegards,\nphil\n\n\n\n\n\n\n\n", "msg_date": "Fri, 10 Dec 2021 17:46:01 +0000", "msg_from": "\"Godfrin, Philippe E\" <Philippe.Godfrin@nov.com>", "msg_from_op": false, "msg_subject": "RE: [EXTERNAL] Re: track_io_timing default setting" }, { "msg_contents": "Greetings,\n\n* Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n> On Fri, 2021-12-10 at 10:20 -0500, Tom Lane wrote:\n> > Jeff Janes <jeff.janes@gmail.com> writes:\n> > > Can we change the default setting of track_io_timing to on?\n> > \n> > That adds a very significant amount of overhead on some platforms\n> > (gettimeofday is not cheap if it requires a kernel call).  And I\n> > doubt the claim that the average Postgres user needs this, and\n> > doubt even more that they need it on all the time.\n> > So I'm -1 on the idea.\n> \n> I set \"track_io_timing\" to \"on\" all the time, same as \"log_lock_waits\",\n> so I'd want them both on by default.\n\nSame. I'd also push back and ask what modern platforms still require a\nkernel call for gettimeofday, and are we really doing ourselves a favor\nby holding back on enabling this by default due to those? 
If it's such\nan issue, could we figure out a way to have an 'auto' option where we\ndetect if the platform has such an issue and disable in that case, but\nenable otherwise?\n\nThanks,\n\nStephen", "msg_date": "Wed, 22 Dec 2021 14:16:16 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: track_io_timing default setting" }, { "msg_contents": "On Wed, Dec 22, 2021 at 11:16 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > I set \"track_io_timing\" to \"on\" all the time, same as \"log_lock_waits\",\n> > so I'd want them both on by default.\n>\n> Same. I'd also push back and ask what modern platforms still require a\n> kernel call for gettimeofday, and are we really doing ourselves a favor\n> by holding back on enabling this by default due to those?\n\n+1\n\n> If it's such\n> an issue, could we figure out a way to have an 'auto' option where we\n> detect if the platform has such an issue and disable in that case, but\n> enable otherwise?\n\nThis is the same principle behind wal_sync_method's per-platform\ndefault, of course. Seems like a similar case to me.\n\nI think that the following heuristic might be a good one: If the\nplatform uses clock_gettime() (for INSTR_TIME_SET_CURRENT() stuff),\nthen enable track_io_timing by default on that platform. Otherwise,\ndisable it by default. (Plus do whatever makes the most sense on\nWindows, which uses something else entirely.)\n\nThe issue with gettimeofday() seems to be that it isn't really\nintended for the same purpose as clock_gettime() -- it's just for\ngetting the time, not measuring elapsed time. It seems reasonable to\nsuppose that an operating system that offers a facility for measuring\nelapsed time won't have horrible performance problems. 
clock_gettime()\nfirst appeared in POSIX almost 30 years ago.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 22 Dec 2021 11:57:46 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: track_io_timing default setting" } ]
[ { "msg_contents": "pg_strtouint64() is a wrapper around strtoull/strtoul/_strtoui64, but it \nseems no longer necessary to have this indirection.\n\nmsvc/Solution.pm claims HAVE_STRTOULL, so the \"MSVC only\" part seems \nunnecessary. Also, we have code in c.h to substitute alternatives for \nstrtoull() if not found, and that would appear to cover all currently \nsupported platforms, so having a further fallback in pg_strtouint64() \nseems unnecessary.\n\n(AFAICT, the only buildfarm member that does not have strtoull() \ndirectly but relies on the code in c.h is gaur. So we can hang on to \nthat code for a while longer, but its utility is also fading away.)\n\nTherefore, remove pg_strtouint64(), and use strtoull() directly in all \ncall sites.\n\n(This is also useful because we have pg_strtointNN() functions that have \na different API than this pg_strtouintNN(). So removing the latter \nmakes this problem go away.)", "msg_date": "Fri, 10 Dec 2021 08:06:14 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Remove pg_strtouint64(), use strtoull() directly" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Therefore, remove pg_strtouint64(), and use strtoull() directly in all \n> call sites.\n\nOur experience with the variable size of \"long\" has left a sufficiently\nbad taste in my mouth that I'm not enthused about adding hard-wired\nassumptions that \"long long\" is identical to int64. So this seems like\nit's going in the wrong direction, and giving up portability that we\nmight want back someday.\n\nI'd be okay with making pg_strtouint64 into a really thin wrapper\n(ie a macro, at least on most platforms). 
But please let's not\ngive up the notational distinction.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Dec 2021 10:25:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove pg_strtouint64(), use strtoull() directly" }, { "msg_contents": "On 10.12.21 16:25, Tom Lane wrote:\n> Our experience with the variable size of \"long\" has left a sufficiently\n> bad taste in my mouth that I'm not enthused about adding hard-wired\n> assumptions that \"long long\" is identical to int64. So this seems like\n> it's going in the wrong direction, and giving up portability that we\n> might want back someday.\n\nWhat kind of scenario do you have in mind? Someone making their long \nlong int 128 bits?\n\n\n\n", "msg_date": "Mon, 13 Dec 2021 10:44:50 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Remove pg_strtouint64(), use strtoull() directly" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 10.12.21 16:25, Tom Lane wrote:\n>> Our experience with the variable size of \"long\" has left a sufficiently\n>> bad taste in my mouth that I'm not enthused about adding hard-wired\n>> assumptions that \"long long\" is identical to int64. So this seems like\n>> it's going in the wrong direction, and giving up portability that we\n>> might want back someday.\n\n> What kind of scenario do you have in mind? Someone making their long \n> long int 128 bits?\n\nYeah, exactly. 
That seems like a natural evolution:\n\tshort -> 2 bytes\n\tint -> 4 bytes\n\tlong -> 8 bytes\n\tlong long -> 16 bytes\nso I'm surprised that vendors haven't done that already instead\nof inventing hacks like __int128.\n\nOur current hard-coded uses of long long are all written on the\nassumption that it's *at least* 64 bits, so we'd survive OK on\nsuch a platform so long as we don't start confusing it with\n*exactly* 64 bits.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Dec 2021 09:44:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove pg_strtouint64(), use strtoull() directly" }, { "msg_contents": "On Mon, Dec 13, 2021 at 9:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, exactly. That seems like a natural evolution:\n> short -> 2 bytes\n> int -> 4 bytes\n> long -> 8 bytes\n> long long -> 16 bytes\n> so I'm surprised that vendors haven't done that already instead\n> of inventing hacks like __int128.\n\nI really am glad they haven't. I think it's super-annoying that we\nneed hacks like UINT64_FORMAT all over the place. I think it was a\nmistake not to nail down the size that each type is expected to be in\nthe original C standard, and making more changes to the conventions\nnow would cause a whole bunch of unnecessary code churn, probably for\nalmost everybody using C. It's not like people are writing high-level\napplications in C these days; it's all low-level stuff that is likely\nto care about the width of a word. It seems much more sensible to\nstandardize on names for words of all lengths in the standard than to\ndo anything else. I don't really care whether the standard chooses\nint128, int256, int512, etc. or long long long, long long long long,\netc. or reallylong, superlong, incrediblylong, etc. 
but I hope they\ndefine new stuff instead of encouraging implementations to redefine\nwhat's there already.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Dec 2021 10:29:01 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove pg_strtouint64(), use strtoull() directly" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I really am glad they haven't. I think it's super-annoying that we\n> need hacks like UINT64_FORMAT all over the place. I think it was a\n> mistake not to nail down the size that each type is expected to be in\n> the original C standard,\n\nWell, mumble. One must remember that when C was designed, there was\na LOT more variability in hardware designs than we see today. They\ncould not have put a language with fixed ideas about datatype widths\nonto, say, PDP-10s (36-bit words) or Crays (60-bit, IIRC). But it\nis a darn shame that people weren't more consistent about mapping\nthe C types onto machines with S/360-like addressing.\n\n> and making more changes to the conventions\n> now would cause a whole bunch of unnecessary code churn, probably for\n> almost everybody using C.\n\nThe error in your thinking is believing that there *is* a convention.\nThere isn't; see \"long\".\n\nAnyway, my point is that we have created a set of type names that\nhave the semantics we want, and we should avoid confusing those with\nunderlying C types that are *not* guaranteed to be the same thing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Dec 2021 10:46:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove pg_strtouint64(), use strtoull() directly" }, { "msg_contents": "On Mon, Dec 13, 2021 at 10:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I really am glad they haven't. 
I think it's super-annoying that we\n> > need hacks like UINT64_FORMAT all over the place. I think it was a\n> > mistake not to nail down the size that each type is expected to be in\n> > the original C standard,\n>\n> Well, mumble. One must remember that when C was designed, there was\n> a LOT more variability in hardware designs than we see today. They\n> could not have put a language with fixed ideas about datatype widths\n> onto, say, PDP-10s (36-bit words) or Crays (60-bit, IIRC). But it\n> is a darn shame that people weren't more consistent about mapping\n> the C types onto machines with S/360-like addressing.\n\nSure.\n\n> > and making more changes to the conventions\n> > now would cause a whole bunch of unnecessary code churn, probably for\n> > almost everybody using C.\n>\n> The error in your thinking is believing that there *is* a convention.\n> There isn't; see \"long\".\n\nI mean I pretty much pointed out exactly that thing with my mention of\nUINT64_FORMAT, so I'm not sure why you're making it seem like I didn't\nknow that.\n\n> Anyway, my point is that we have created a set of type names that\n> have the semantics we want, and we should avoid confusing those with\n> underlying C types that are *not* guaranteed to be the same thing.\n\nI agree entirely, but it's still an annoyance when dealing with printf\nformat codes and other operating-system defined types whose width we\ndon't know. Standardization here makes it easier to write good code;\ndifferent conventions make it harder. 
I'm guessing that other people\nhave noticed that too, and that's why we're getting stuff like\n__int128 instead of redefining long long.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Dec 2021 11:00:11 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove pg_strtouint64(), use strtoull() directly" }, { "msg_contents": "On 13.12.21 15:44, Tom Lane wrote:\n> Our current hard-coded uses of long long are all written on the\n> assumption that it's*at least* 64 bits, so we'd survive OK on\n> such a platform so long as we don't start confusing it with\n> *exactly* 64 bits.\n\nOK, makes sense. Here is an alternative patch. It introduces two \nlight-weight macros strtoi64() and strtou64() (compare e.g., strtoimax() \nin POSIX) in c.h and removes pg_strtouint64(). This moves the \nportability layer from numutils.c to c.h, so it's closer to the rest of \nthe int64 portability code. And that way it is available to not just \nserver code. And it resolves the namespace collision with the \npg_strtointNN() functions in numutils.c.", "msg_date": "Tue, 14 Dec 2021 21:42:42 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Remove pg_strtouint64(), use strtoull() directly" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> OK, makes sense. Here is an alternative patch. It introduces two \n> light-weight macros strtoi64() and strtou64() (compare e.g., strtoimax() \n> in POSIX) in c.h and removes pg_strtouint64(). This moves the \n> portability layer from numutils.c to c.h, so it's closer to the rest of \n> the int64 portability code. And that way it is available to not just \n> server code. And it resolves the namespace collision with the \n> pg_strtointNN() functions in numutils.c.\n\nWorks for me. 
I'm not in a position to verify that this'll work\non Windows, but the buildfarm will tell us that quickly enough.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Dec 2021 17:30:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove pg_strtouint64(), use strtoull() directly" } ]
[ { "msg_contents": "Hello,\n\nOur customer thinks he has found a memory leak on ECPG and AIX.\n\nThe code is quite simple. It declares a cursor, opens it, and fetches the\nonly line available in the table many times. After some time, the client\ncrashes with a segfault error. According to him, it consumed around 256MB.\nWhat's weird is that it works great on Linux, but crashed on AIX. One\ncoworker thought it could be the compiler. Our customer used cc, but he\nalso tried with gcc, and got the same error.\n\nThe test case is attached (testcase.pgc is the ECPG code, testcase.sh is\nwhat our customer used to precompile and compile his code). Do you have any\nidea why that happens on AIX?\n\nTwo queries to create the table and populate it with a single record:\n\nCREATE TABLE foo(\n key integer PRIMARY KEY,\n value character varying(20)\n);\nINSERT INTO foo values (1, 'one');\n\nThanks.\n\nRegards.\n\n\n-- \nGuillaume.", "msg_date": "Fri, 10 Dec 2021 15:40:50 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Probable memory leak with ECPG and AIX" }, { "msg_contents": "On Fri, Dec 10, 2021 at 03:40:50PM +0100, Guillaume Lelarge wrote:\n> Hello,\n> \n> Our customer thinks he has found a memory leak on ECPG and AIX.\n> \n> The code is quite simple. It declares a cursor, opens it, and fetches the\n> only line available in the table many times. After some time, the client\n> crashes with a segfault error. According to him, it consumed around 256MB.\n> What's weird is that it works great on Linux, but crashed on AIX. One\n> coworker thought it could be the compiler. Our customer used cc, but he\n> also tried with gcc, and got the same error.\n\nA memory leak isn't the same as a segfault (although I don't know how AIX\nresponds to OOM).\n\nCan you show that it's a memory leak ? 
Show RAM use increasing continuously\nand linearly with loop count.\n\nHow many loops does it take to crash ?\n\nCould you obtain a backtrace ?\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 11 Dec 2021 00:52:36 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Probable memory leak with ECPG and AIX" }, { "msg_contents": "Le sam. 11 déc. 2021 à 07:52, Justin Pryzby <pryzby@telsasoft.com> a écrit :\n\n> On Fri, Dec 10, 2021 at 03:40:50PM +0100, Guillaume Lelarge wrote:\n> > Hello,\n> >\n> > Our customer thinks he has found a memory leak on ECPG and AIX.\n> >\n> > The code is quite simple. It declares a cursor, opens it, and fetches the\n> > only line available in the table many times. After some time, the client\n> > crashes with a segfault error. According to him, it consumed around\n> 256MB.\n> > What's weird is that it works great on Linux, but crashed on AIX. One\n> > coworker thought it could be the compiler. Our customer used cc, but he\n> > also tried with gcc, and got the same error.\n>\n> A memory leak isn't the same as a segfault (although I don't know how AIX\n> responds to OOM).\n>\n> Can you show that it's a memory leak ? Show RAM use increasing\n> continuously\n> and linearly with loop count.\n>\n> How many loops does it take to crash ?\n>\n> Could you obtain a backtrace ?\n>\n>\nThanks. I'll try to get all these informations, but it won't be before\nmonday.\n\n\n-- \nGuillaume.", "msg_date": "Sat, 11 Dec 2021 08:49:25 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Probable memory leak with ECPG and AIX" }, { "msg_contents": "On Fri, Dec 10, 2021 at 03:40:50PM +0100, Guillaume Lelarge wrote:\n> After some time, the client\n> crashes with a segfault error. According to him, it consumed around 256MB.\n> What's weird is that it works great on Linux, but crashed on AIX.\n\nThat almost certainly means he's using a 32-bit binary with the default heap\nsize.  To use more heap on AIX, build 64-bit or override the heap size.  For\nexample, \"env LDR_CNTRL=MAXDATA=0x80000000 ./a.out\" gives 2GiB of heap.  See\nhttps://www.postgresql.org/docs/devel/installation-platform-notes.html#INSTALLATION-NOTES-AIX\nfor more ways to control heap size.  While that documentation focuses on the\nserver, the same techniques apply to clients like your test program.\n\nThat said, I don't know why your test program reaches 256MB on AIX.  On\nGNU/Linux, it uses a lot less. 
What version of PostgreSQL provided your\nclient libraries?\n\n\n", "msg_date": "Sat, 11 Dec 2021 23:34:11 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Probable memory leak with ECPG and AIX" }, { "msg_contents": "Hi,\n\n(I work with Guillaume on this case.)\n\nOn Sun, Dec 12, 2021 at 8:34 AM Noah Misch <noah@leadboat.com> wrote:\n\n> That almost certainly means he's using a 32-bit binary with the default\n> heap\n> size.  To use more heap on AIX, build 64-bit or override the heap size.\n> For\n> example, \"env LDR_CNTRL=MAXDATA=0x80000000 ./a.out\" gives 2GiB of heap.\n> See\n>\n> https://www.postgresql.org/docs/devel/installation-platform-notes.html#INSTALLATION-NOTES-AIX\n> for more ways to control heap size.  While that documentation focuses on\n> the\n> server, the same techniques apply to clients like your test program.\n>\n> That said, I don't know why your test program reaches 256MB on AIX.  On\n> GNU/Linux, it uses a lot less.  What version of PostgreSQL provided your\n> client libraries?\n>\n\nThey use a 12.3 in production but have also tested on a 12.9 with the same\nresult.\nWe relayed your suggestion and will get back to you with the results.\n\nThanks for the input !", "msg_date": "Mon, 13 Dec 2021 14:07:36 +0100", "msg_from": "talk to ben <blo.talkto@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Probable memory leak with ECPG and AIX" }, { "msg_contents": "Our client confirmed that the more he fetches the more memory is consumed.\nThe segfault was indeed caused by the absence of LDR_CNTRL.\n\nThe tests show that:\n\n* without LDR_CNTRL, we reach 256Mb and segfault ;\n* with LDR_CNTRL=MAXDATA=0x10000000, we reach 256Mo but there is no\nsegfault, the program just continues running ;\n* with LDR_CNTRL=MAXDATA=0x80000000, we reach 2Go and there is no segfault\neither, the program just continues running.", "msg_date": "Wed, 15 Dec 2021 16:20:42 +0100", "msg_from": "=?UTF-8?Q?Benoit_Lobr=C3=A9au?= <benoit.lobreau@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Probable memory leak with ECPG and AIX" }, { "msg_contents": "On Wed, Dec 15, 2021 at 04:20:42PM +0100, Benoit Lobréau wrote:\n> * with LDR_CNTRL=MAXDATA=0x10000000, we reach 256Mo but there is no\n> segfault, the program just continues running ;\n> * with LDR_CNTRL=MAXDATA=0x80000000, we reach 2Go and there is no segfault\n> either, the program just continues running.\n\nI get the same results.  The leak arises because AIX freelocale() doesn't free\nall memory allocated in newlocale(). 
The following program uses trivial\nmemory on GNU/Linux, but it leaks like you're seeing on AIX:\n\n#include <locale.h>\nint main(int argc, char **argv)\n{\n\twhile (1)\n\t\tfreelocale(newlocale(LC_NUMERIC_MASK, \"C\", (locale_t) 0));\n\treturn 0;\n}\n\nIf you have access to file an AIX bug, I recommend doing so. If we want\nPostgreSQL to work around this, one idea is to have ECPG do this newlocale()\nless often. For example, do it once per process or once per connection\ninstead of once per ecpg_do_prologue().\n\n\n", "msg_date": "Fri, 31 Dec 2021 23:40:55 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Probable memory leak with ECPG and AIX" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> I get the same results. The leak arises because AIX freelocale() doesn't free\n> all memory allocated in newlocale(). The following program uses trivial\n> memory on GNU/Linux, but it leaks like you're seeing on AIX:\n\nBleah.\n\n> If you have access to file an AIX bug, I recommend doing so. If we want\n> PostgreSQL to work around this, one idea is to have ECPG do this newlocale()\n> less often. For example, do it once per process or once per connection\n> instead of once per ecpg_do_prologue().\n\nIt's worse than that: see also ECPGget_desc(). Seems like a case\ncould be made for doing something about this just on the basis\nof cycles expended, never mind freelocale() bugs.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 01 Jan 2022 11:35:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Probable memory leak with ECPG and AIX" }, { "msg_contents": "On Sat, Jan 01, 2022 at 11:35:02AM -0500, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > I get the same results. The leak arises because AIX freelocale() doesn't free\n> > all memory allocated in newlocale(). 
The following program uses trivial\n> > memory on GNU/Linux, but it leaks like you're seeing on AIX:\n> \n> Bleah.\n> \n> > If you have access to file an AIX bug, I recommend doing so. If we want\n> > PostgreSQL to work around this, one idea is to have ECPG do this newlocale()\n> > less often. For example, do it once per process or once per connection\n> > instead of once per ecpg_do_prologue().\n> \n> It's worse than that: see also ECPGget_desc(). Seems like a case\n> could be made for doing something about this just on the basis\n> of cycles expended, never mind freelocale() bugs.\n\nAgreed. Once per process seems best. I only hesitated before since it means\nnothing will free this storage, which could be annoying in the context of\nValgrind and similar. However, ECPG already has bits of never-freed memory in\nthe form of pthread_key_create() calls having no pthread_key_delete(), so I\ndon't mind adding a bit more.\n\n\n", "msg_date": "Sat, 1 Jan 2022 16:07:50 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Probable memory leak with ECPG and AIX" }, { "msg_contents": "Le dim. 2 janv. 2022 à 01:07, Noah Misch <noah@leadboat.com> a écrit :\n\n> On Sat, Jan 01, 2022 at 11:35:02AM -0500, Tom Lane wrote:\n> > Noah Misch <noah@leadboat.com> writes:\n> > > I get the same results. The leak arises because AIX freelocale()\n> doesn't free\n> > > all memory allocated in newlocale(). The following program uses\n> trivial\n> > > memory on GNU/Linux, but it leaks like you're seeing on AIX:\n> >\n> > Bleah.\n> >\n> > > If you have access to file an AIX bug, I recommend doing so. If we\n> want\n> > > PostgreSQL to work around this, one idea is to have ECPG do this\n> newlocale()\n> > > less often. For example, do it once per process or once per connection\n> > > instead of once per ecpg_do_prologue().\n> >\n> > It's worse than that: see also ECPGget_desc(). 
Seems like a case\n> > could be made for doing something about this just on the basis\n> > of cycles expended, never mind freelocale() bugs.\n>\n> Agreed. Once per process seems best. I only hesitated before since it\n> means\n> nothing will free this storage, which could be annoying in the context of\n> Valgrind and similar. However, ECPG already has bits of never-freed\n> memory in\n> the form of pthread_key_create() calls having no pthread_key_delete(), so I\n> don't mind adding a bit more.\n>\n\nDid this get anywhere? Is there something we could do to make this move\nforward?\n\n\n-- \nGuillaume.", "msg_date": "Fri, 25 Mar 2022 10:40:51 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Probable memory leak with ECPG and AIX" }, { "msg_contents": "Guillaume Lelarge <guillaume@lelarge.info> writes:\n> Did this get anywhere? Is there something we could do to make this move\n> forward?\n\nNo. Write a patch?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Mar 2022 09:25:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Probable memory leak with ECPG and AIX" }, { "msg_contents": "Le ven. 25 mars 2022, 14:25, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n\n> Guillaume Lelarge <guillaume@lelarge.info> writes:\n> > Did this get anywhere? Is there something we could do to make this move\n> > forward?\n>\n> No. Write a patch?\n>\n\nI wouldn't have asked if I could write such a patch :-)", "msg_date": "Fri, 25 Mar 2022 16:59:00 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Probable memory leak with ECPG and AIX" }, { "msg_contents": "On Sat, Jan 01, 2022 at 04:07:50PM -0800, Noah Misch wrote:\n> On Sat, Jan 01, 2022 at 11:35:02AM -0500, Tom Lane wrote:\n> > Noah Misch <noah@leadboat.com> writes:\n> > > I get the same results. The leak arises because AIX freelocale() doesn't free\n> > > all memory allocated in newlocale(). The following program uses trivial\n> > > memory on GNU/Linux, but it leaks like you're seeing on AIX:\n> > \n> > Bleah.\n> > \n> > > If you have access to file an AIX bug, I recommend doing so. 
If we want\n> > > PostgreSQL to work around this, one idea is to have ECPG do this newlocale()\n> > > less often. For example, do it once per process or once per connection\n> > > instead of once per ecpg_do_prologue().\n> > \n> > It's worse than that: see also ECPGget_desc(). Seems like a case\n> > could be made for doing something about this just on the basis\n> > of cycles expended, never mind freelocale() bugs.\n> \n> Agreed. Once per process seems best. I only hesitated before since it means\n> nothing will free this storage, which could be annoying in the context of\n> Valgrind and similar. However, ECPG already has bits of never-freed memory in\n> the form of pthread_key_create() calls having no pthread_key_delete(), so I\n> don't mind adding a bit more.\n\nThe comparison to pthread_key_create() wasn't completely fair. While POSIX\nwelcomes pthread_key_create() to fail with ENOMEM, the glibc implementation\nappears not to allocate memory. Even so, I'm okay leaking one newlocale() per\nprocess lifetime.\n\nI had expected to use pthread_once() for the newlocale() call, but there would\nbe no useful way to report failure and try again later. Instead, I called\nnewlocale() while ECPGconnect() holds connections_mutex. See log message and\ncomments for details. I tested \"./configure ac_cv_func_uselocale=no ...\" and\ntested the scenario of newlocale() failing every time.", "msg_date": "Sun, 17 Apr 2022 21:16:45 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Probable memory leak with ECPG and AIX" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> I had expected to use pthread_once() for the newlocale() call, but there would\n> be no useful way to report failure and try again later. Instead, I called\n> newlocale() while ECPGconnect() holds connections_mutex. See log message and\n> comments for details. 
I tested \"./configure ac_cv_func_uselocale=no ...\" and\n> tested the scenario of newlocale() failing every time.\n\nThis looks solid to me. The only nit I can find to pick is that I'd\nhave added one more comment, along the lines of\n\ndiff --git a/src/interfaces/ecpg/ecpglib/connect.c b/src/interfaces/ecpg/ecpglib/connect.c\nindex 9f958b822c..96f99ae072 100644\n--- a/src/interfaces/ecpg/ecpglib/connect.c\n+++ b/src/interfaces/ecpg/ecpglib/connect.c\n@@ -508,6 +508,11 @@ ECPGconnect(int lineno, int c, const char *name, const char *user, const char *p\n #ifdef ENABLE_THREAD_SAFETY\n \tpthread_mutex_lock(&connections_mutex);\n #endif\n+\n+\t/*\n+\t * ... but first, make certain we have created ecpg_clocale. Rely on\n+\t * holding connections_mutex to ensure this is done by only one thread.\n+\t */\n #ifdef HAVE_USELOCALE\n \tif (!ecpg_clocale)\n \t{\n\nI've marked it RFC.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Jul 2022 14:53:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Probable memory leak with ECPG and AIX" }, { "msg_contents": "On Sat, Jul 02, 2022 at 02:53:34PM -0400, Tom Lane wrote:\n> This looks solid to me. The only nit I can find to pick is that I'd\n> have added one more comment, along the lines of\n> \n> diff --git a/src/interfaces/ecpg/ecpglib/connect.c b/src/interfaces/ecpg/ecpglib/connect.c\n> index 9f958b822c..96f99ae072 100644\n> --- a/src/interfaces/ecpg/ecpglib/connect.c\n> +++ b/src/interfaces/ecpg/ecpglib/connect.c\n> @@ -508,6 +508,11 @@ ECPGconnect(int lineno, int c, const char *name, const char *user, const char *p\n> #ifdef ENABLE_THREAD_SAFETY\n> \tpthread_mutex_lock(&connections_mutex);\n> #endif\n> +\n> +\t/*\n> +\t * ... but first, make certain we have created ecpg_clocale. 
Rely on\n> +\t * holding connections_mutex to ensure this is done by only one thread.\n> +\t */\n> #ifdef HAVE_USELOCALE\n> \tif (!ecpg_clocale)\n> \t{\n> \n> I've marked it RFC.\n\nThanks for reviewing. Pushed with that comment. prairiedog complains[1]:\n\n ld: common symbols not allowed with MH_DYLIB output format with the -multi_module option\n connect.o definition of common _ecpg_clocale (size 4)\n\nI bet this would fix it:\n\n--- a/src/interfaces/ecpg/ecpglib/connect.c\n+++ b/src/interfaces/ecpg/ecpglib/connect.c\n@@ -11,7 +11,7 @@\n #include \"sqlca.h\"\n \n #ifdef HAVE_USELOCALE\n-locale_t\tecpg_clocale;\n+locale_t\tecpg_clocale = (locale_t) 0;\n #endif\n \n #ifdef ENABLE_THREAD_SAFETY\n\nI hear[2] adding -fno-common to compiler options would also fix that. Still,\nin the absence of other opinions, I'll just add the no-op initialization.\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prairiedog&dt=2022-07-03%2001%3A14%3A19\n[2] https://gcc.gnu.org/legacy-ml/gcc/2005-06/msg00378.html\n\n\n", "msg_date": "Sat, 2 Jul 2022 20:06:19 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Probable memory leak with ECPG and AIX" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> Thanks for reviewing. Pushed with that comment. prairiedog complains[1]:\n> ld: common symbols not allowed with MH_DYLIB output format with the -multi_module option\n> connect.o definition of common _ecpg_clocale (size 4)\n\nBlah.\n\n> I bet this would fix it:\n\n> -locale_t\tecpg_clocale;\n> +locale_t\tecpg_clocale = (locale_t) 0;\n\nHmm, I was considering suggesting that just on stylistic grounds,\nbut decided it was too nitpicky even for me.\nDo you want me to test it on prairiedog?\n\n> I hear[2] adding -fno-common to compiler options would also fix that.\n\nI've got -fno-common turned on on my other macOS animals, but in\nthose cases I did it to detect bugs not fix them. 
I'm not sure\nwhether prairiedog's ancient toolchain has that switch at all,\nor whether it behaves the same as in more recent platforms.\nStill, that gcc.gnu.org message you cite is of the right era.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Jul 2022 23:37:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Probable memory leak with ECPG and AIX" }, { "msg_contents": "On Sat, Jul 02, 2022 at 11:37:08PM -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > Thanks for reviewing. Pushed with that comment. prairiedog complains[1]:\n> > ld: common symbols not allowed with MH_DYLIB output format with the -multi_module option\n> > connect.o definition of common _ecpg_clocale (size 4)\n> \n> Blah.\n> \n> > I bet this would fix it:\n> \n> > -locale_t\tecpg_clocale;\n> > +locale_t\tecpg_clocale = (locale_t) 0;\n> \n> Hmm, I was considering suggesting that just on stylistic grounds,\n> but decided it was too nitpicky even for me.\n> Do you want me to test it on prairiedog?\n\nSure, if it's easy enough. If not, I'm 87% sure it will suffice.\n\n\n", "msg_date": "Sat, 2 Jul 2022 20:43:46 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Probable memory leak with ECPG and AIX" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Sat, Jul 02, 2022 at 11:37:08PM -0400, Tom Lane wrote:\n>> Do you want me to test it on prairiedog?\n\n> Sure, if it's easy enough. If not, I'm 87% sure it will suffice.\n\nOn it now, but it'll take a few minutes :-(\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Jul 2022 23:45:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Probable memory leak with ECPG and AIX" }, { "msg_contents": "I wrote:\n> On it now, but it'll take a few minutes :-(\n\nConfirmed that either initializing ecpg_clocale or adding -fno-common\nallows the ecpglib build to succeed. 
(Testing it beyond that would\nrequire another hour or so to build the rest of the system, so I won't.)\n\nAs I said, I was already leaning to the idea that initializing the\nvariable explicitly is better style, so I recommend we do that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Jul 2022 23:59:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Probable memory leak with ECPG and AIX" }, { "msg_contents": "On Sat, Jul 02, 2022 at 11:59:58PM -0400, Tom Lane wrote:\n> Confirmed that either initializing ecpg_clocale or adding -fno-common\n> allows the ecpglib build to succeed. (Testing it beyond that would\n> require another hour or so to build the rest of the system, so I won't.)\n> \n> As I said, I was already leaning to the idea that initializing the\n> variable explicitly is better style, so I recommend we do that.\n\nWorks for me. Pushed that way, and things have been clean.\n\n\n", "msg_date": "Sun, 3 Jul 2022 22:59:22 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Probable memory leak with ECPG and AIX" } ]
[ { "msg_contents": "Hi,\r\n\r\nHave a tiny patch to add an implementation of spin_delay() for Arm64 processors to match behavior with x86's PAUSE instruction. See negligible benefit on the pgbench tpcb-like workload so at worst it appears to do no harm but should help some workloads that experience some lock contention that need to spin.\r\n\r\nThanks,\r\nGeoffrey Blake", "msg_date": "Fri, 10 Dec 2021 17:44:36 +0000", "msg_from": "\"Blake, Geoff\" <blakgeof@amazon.com>", "msg_from_op": true, "msg_subject": "Add spin_delay() implementation for Arm in s_lock.h" }, { "msg_contents": "\"Blake, Geoff\" <blakgeof@amazon.com> writes:\n> Have a tiny patch to add an implementation of spin_delay() for Arm64 processors to match behavior with x86's PAUSE instruction. See negligible benefit on the pgbench tpcb-like workload so at worst it appears to do no harm but should help some workloads that experience some lock contention that need to spin.\n\nGiven the very wide variety of ARM implementations out there,\nI'm not sure that we want to take a patch like this on the basis of\nexactly zero evidence. It could as easily be a net loss as a win.\nWhat did you test exactly?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 11 Dec 2021 14:16:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add spin_delay() implementation for Arm in s_lock.h" }, { "msg_contents": "Hi Tom,\r\n\r\n> What did you test exactly?\r\n\r\nTested 3 benchmark configurations on an m6g.16xlarge (Graviton2, 64 cpus, 256GB RAM)\r\nI set the scale factor to consume about 1/3 of 256GB and the other parameters in the next line.\r\npgbench setup: -F 90 -s 5622 -c 256\r\nPgbench select-only w/ patch 662804 tps (-0.5%)\r\n w/o patch 666354 tps. 
\r\n tpcb-like w/ patch 35844 tps (0%)\r\n w/o patch 35835 tps\r\n\r\nWe also test with Hammerdb when evaluating patches; it shows the patch gets +3%:\r\nHammerdb (192 Warehouse 256 clients)\r\nw/ patch 1147463 NOPM (+3%)\r\nw/o patch 1112908 NOPM\r\n\r\nI've run pgbench more than once and the measured TPS values overlap; even though the means on select-only show a small degradation, at the moment I am concluding it is noise. On Hammerdb, the results show a measurable difference.\r\n\r\nThanks,\r\nGeoff\r\n\r\n\r\n\r\n", "msg_date": "Mon, 13 Dec 2021 17:27:00 +0000", "msg_from": "\"Blake, Geoff\" <blakgeof@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add spin_delay() implementation for Arm in s_lock.h" }, { "msg_contents": "Tom,\r\n\r\nHope everything is well going into the new year. I'd like to pick this discussion back up and get your thoughts on the patch with the data I posted 2 weeks prior. Is there more data that would be helpful? Different setup? Data on older versions of Postgresql to ascertain if it makes more sense on versions before the large re-work of the snapshot algorithm that exhibited quite a bit of synchronization contention?\r\n\r\nThanks,\r\nGeoff\r\n\r\n\r\n", "msg_date": "Mon, 3 Jan 2022 18:11:01 +0000", "msg_from": "\"Blake, Geoff\" <blakgeof@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add spin_delay() implementation for Arm in s_lock.h" }, { "msg_contents": "\"Blake, Geoff\" <blakgeof@amazon.com> writes:\n> Hope everything is well going into the new year. I'd like to pick this discussion back up and get your thoughts on the patch with the data I posted 2 weeks prior. Is there more data that would be helpful? Different setup? Data on older versions of Postgresql to ascertain if it makes more sense on versions before the large re-work of the snapshot algorithm that exhibited quite a bit of synchronization contention?\n\nI spent some time working on this. 
I don't have a lot of faith in\npgbench as a measurement testbed for spinlock contention, because over\nthe years we've done a good job of getting rid of that in our main\ncode paths (both the specific change you mention, and many others).\nAfter casting around a bit and thinking about writing a bespoke test\nframework, I landed on the idea of adding some intentional spinlock\ncontention to src/test/modules/test_shm_mq, which is a prefab test\nframework for passing data among multiple worker processes. The\nattached quick-hack patch makes it grab and release a spinlock once\nper passed message. I'd initially expected that this would show only\nmarginal changes, because you'd hope that a spinlock acquisition would\nbe reasonably cheap compared to shm_mq_receive plus shm_mq_send.\nTurns out not though.\n\nThe proposed test case is\n\n(1) patch test_shm_mq as below\n\n(2) time this query:\n\nSELECT test_shm_mq_pipelined(16384, 'xyzzy', 10000000, n);\n\nfor various values of \"n\" up to about how many cores you have.\n(You'll probably need to bump up max_worker_processes.)\n\nFor context, on my Intel-based main workstation (8-core Xeon W-2245),\nthe time to do this with stock test_shm_mq is fairly level.\nReporting best-of-3 runs:\n\nregression=# SELECT test_shm_mq_pipelined(16384, 'xyzzy', 10000000, 1);\nTime: 1386.413 ms (00:01.386)\n\nregression=# SELECT test_shm_mq_pipelined(16384, 'xyzzy', 10000000, 4);\nTime: 1302.503 ms (00:01.303)\n\nregression=# SELECT test_shm_mq_pipelined(16384, 'xyzzy', 10000000, 8);\nTime: 1373.121 ms (00:01.373)\n\nHowever, after applying the contention patch:\n\nregression=# SELECT test_shm_mq_pipelined(16384, 'xyzzy', 10000000, 1);\nTime: 1346.362 ms (00:01.346)\n\nregression=# SELECT test_shm_mq_pipelined(16384, 'xyzzy', 10000000, 4);\nTime: 3313.490 ms (00:03.313)\n\nregression=# SELECT test_shm_mq_pipelined(16384, 'xyzzy', 10000000, 8);\nTime: 7660.329 ms (00:07.660)\n\nSo this seems like (a) it's a plausible model for code that 
has\nunoptimized spinlock contention, and (b) the effects are large\nenough that you needn't fret too much about measurement noise.\n\nI tried this out on a handy Apple M1 mini, which I concede\nis not big iron but it's pretty decent aarch64 hardware.\nWith current HEAD's spinlock code, I get (again best-of-3):\n\nregression=# SELECT test_shm_mq_pipelined(16384, 'xyzzy', 10000000, 1);\nTime: 1630.255 ms (00:01.630)\n\nregression=# SELECT test_shm_mq_pipelined(16384, 'xyzzy', 10000000, 4);\nTime: 3495.066 ms (00:03.495)\n\nregression=# SELECT test_shm_mq_pipelined(16384, 'xyzzy', 10000000, 8);\nTime: 19541.929 ms (00:19.542)\n\nWith your spin-delay patch:\n\nregression=# SELECT test_shm_mq_pipelined(16384, 'xyzzy', 10000000, 1);\nTime: 1643.524 ms (00:01.644)\n\nregression=# SELECT test_shm_mq_pipelined(16384, 'xyzzy', 10000000, 4);\nTime: 3404.625 ms (00:03.405)\n\nregression=# SELECT test_shm_mq_pipelined(16384, 'xyzzy', 10000000, 8);\nTime: 19260.721 ms (00:19.261)\n\nSo I don't see a lot of reason to think your patch changes anything.\nMaybe on something with more cores?\n\nFor grins I also tried this same test with the use-CAS-for-TAS patch\nthat was being discussed November before last, and it didn't\nreally show up as any improvement either:\n\nregression=# SELECT test_shm_mq_pipelined(16384, 'xyzzy', 10000000, 1);\nTime: 1608.642 ms (00:01.609)\n\nregression=# SELECT test_shm_mq_pipelined(16384, 'xyzzy', 10000000, 4);\nTime: 3396.564 ms (00:03.397)\n\nregression=# SELECT test_shm_mq_pipelined(16384, 'xyzzy', 10000000, 8);\nTime: 20092.683 ms (00:20.093)\n\nMaybe that's a little better in the uncontended (single-worker)\ncase, but it's worse at the high end.\n\nI'm really curious to hear if this measurement method shows\nany interesting improvements on your hardware.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 06 Jan 2022 21:12:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add spin_delay() implementation 
for Arm in s_lock.h" }, { "msg_contents": "Hi,\n\n> I landed on the idea of adding some intentional spinlock\n> contention to src/test/modules/test_shm_mq, which is a prefab test\n> framework for passing data among multiple worker processes. The\n> attached quick-hack patch makes it grab and release a spinlock once\n> per passed message.\n\nI wonder if this will show the full set of spinlock contention issues - isn't\nthis only causing contention for one spinlock between two processes? It's not\ntoo hard to imagine delays being more important the more processes contend for\none cacheline. I only skimmed your changes, so I might also just have\nmisunderstood what you were doing...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 6 Jan 2022 18:33:55 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add spin_delay() implementation for Arm in s_lock.h" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n>> I landed on the idea of adding some intentional spinlock\n>> contention to src/test/modules/test_shm_mq, which is a prefab test\n>> framework for passing data among multiple worker processes. 
The\n>> attached quick-hack patch makes it grab and release a spinlock once\n>> per passed message.\n\n> I wonder if this will show the full set of spinlock contention issues - isn't\n> this only causing contention for one spinlock between two processes?\n\nI don't think so -- the point of using the \"pipelined\" variant is\nthat messages are passing between all N worker processes concurrently.\n(With the proposed test, I see N processes all pinning their CPUs;\nif I use the non-pipelined API, they are busy but nowhere near 100%.)\n\nIt is just one spinlock, true, but I think the point is to gauge\nwhat happens with N processes all contending for the same lock.\nWe could add some more complexity to use multiple locks, but\ndoes that really add anything but complexity?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 06 Jan 2022 21:39:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add spin_delay() implementation for Arm in s_lock.h" }, { "msg_contents": "Hi,\n\nOn 2022-01-06 21:39:57 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I wonder if this will show the full set of spinlock contention issues - isn't\n> > this only causing contention for one spinlock between two processes?\n> \n> I don't think so -- the point of using the \"pipelined\" variant is\n> that messages are passing between all N worker processes concurrently.\n> (With the proposed test, I see N processes all pinning their CPUs;\n> if I use the non-pipelined API, they are busy but nowhere near 100%.)\n\nMy understanding of the shm_mq code is that that ends up with N shm_mq\ninstances, one for each worker. After all:\n\n> * shm_mq.c\n> *\t single-reader, single-writer shared memory message queue\n\n\nThese separate shm_mq instances forward messages in a circle,\n\"leader\"->worker_1->worker_2->...->\"leader\". 
So there isn't a single contended\nspinlock, but a bunch of different spinlocks, each with at most two backends\naccessing it?\n\n\n> It is just one spinlock, true, but I think the point is to gauge\n> what happens with N processes all contending for the same lock.\n> We could add some more complexity to use multiple locks, but\n> does that really add anything but complexity?\n\nRight, I agree that that's what we shoudl test - it's just no immediately\nobvious to me that we are.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 6 Jan 2022 19:13:42 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add spin_delay() implementation for Arm in s_lock.h" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> These separate shm_mq instances forward messages in a circle,\n> \"leader\"->worker_1->worker_2->...->\"leader\". So there isn't a single contended\n> spinlock, but a bunch of different spinlocks, each with at most two backends\n> accessing it?\n\nNo; there's just one spinlock. I'm re-purposing the spinlock that\ntest_shm_mq uses to protect its setup operations (and thereafter\nignores). AFAICS the N+1 shm_mq instances don't internally contain\nspinlocks; they all use atomic ops.\n\n(Well, on crappy architectures maybe there's spinlocks underneath\nthe atomic ops, but I don't think we care about such cases here.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 06 Jan 2022 22:23:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add spin_delay() implementation for Arm in s_lock.h" }, { "msg_contents": "On 2022-01-06 22:23:38 -0500, Tom Lane wrote:\n> No; there's just one spinlock. 
I'm re-purposing the spinlock that\n> test_shm_mq uses to protect its setup operations (and thereafter\n> ignores).\n\nOh, sorry, misread :(\n\n\n> AFAICS the N+1 shm_mq instances don't internally contain\n> spinlocks; they all use atomic ops.\n\nThey contain spinlocks too, and the naming is similar enough that I got\nconfused:\nstruct shm_mq\n{\n\tslock_t\t\tmq_mutex;\n\nWe don't use them for all that much anymore though...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 6 Jan 2022 19:37:31 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add spin_delay() implementation for Arm in s_lock.h" }, { "msg_contents": "Tom, Andres,\r\n\r\nI spun up a 64-core Graviton2 instance (where I reported seeing improvement with this patch) and ran the provided regression test with and without my proposed on top of mainline PG. I ran 4 runs each of 63 workers where we should see the most contention and most impact from the patch. I am reporting the average and standard deviation, the average with the patch is 10% lower latency, but there is overlap in the standard deviation. 
I'll gather additional data at lower worker counts and post later to see what the trend is.\r\n\r\nCmd: postgres=# SELECT test_shm_mq_pipelined(16384, 'xyzzy', 10000000, workers);\r\n\r\nAvg +/- standard dev\r\n63 workers w/o patch: 552443ms +/- 22841ms\r\n63 workers w/ patch: 502727 +/- 45253ms\r\n\r\nBest results\r\nw/o patch: 521216ms\r\nw/ patch: 436442ms\r\n\r\nThanks,\r\nGeoff\r\n\r\n\r\n", "msg_date": "Wed, 12 Jan 2022 18:34:12 +0000", "msg_from": "\"Blake, Geoff\" <blakgeof@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add spin_delay() implementation for Arm in s_lock.h" }, { "msg_contents": "As promised, here is the remaining data:\r\n\r\n1 worker, w/o patch: 5236 ms +/- 252ms\r\n1 worker, w/ patch: 5529 ms +/- 168ms\r\n\r\n2 worker, w/o patch: 4917 ms +/- 180ms\r\n2 worker, w/ patch: 4745 ms +/- 169ms\r\n\r\n4 worker, w/o patch: 6564 ms +/- 336ms\r\n4 worker, w/ patch: 6105 ms +/- 177ms\r\n\r\n8 worker, w/o patch: 9575 ms +/- 2375ms\r\n8 worker, w/ patch: 8115 ms +/- 391ms\r\n\r\n16 worker, w/o patch: 19367 ms +/- 3543ms\r\n16 worker, w/ patch: 18004 ms +/- 3701ms\r\n\r\n32 worker, w/o patch: 101509 ms +/- 22651ms\r\n32 worker, w/ patch: 104234 ms +/- 26821ms\r\n\r\n48 worker, w/o patch: 243329 ms +/- 70037ms\r\n48 worker, w/ patch: 189965 ms +/- 79459ms\r\n\r\n64 worker, w/o patch: 552443 ms +/- 22841ms\r\n64 worker, w/ patch: 502727 ms +/- 45253ms\r\n\r\nFrom this data, on average the patch is beneficial at high worker (CPU) counts tested: 48, 63. At 32 and below the performance is relatively close to each other. 
\r\n\r\nThanks,\r\nGeoff\r\n\r\n", "msg_date": "Thu, 13 Jan 2022 15:35:12 +0000", "msg_from": "\"Blake, Geoff\" <blakgeof@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add spin_delay() implementation for Arm in s_lock.h" }, { "msg_contents": "Hi Tom, Andres,\r\n\r\nAny additional feedback for this patch?\r\n\r\nThanks,\r\nGeoff Blake\r\n\r\n", "msg_date": "Tue, 25 Jan 2022 19:57:33 +0000", "msg_from": "\"Blake, Geoff\" <blakgeof@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add spin_delay() implementation for Arm in s_lock.h" }, { "msg_contents": "\"Blake, Geoff\" <blakgeof@amazon.com> writes:\n> Hi Tom, Andres,\n> Any additional feedback for this patch?\n\nI did some more research and testing:\n\n* Using a Mac with the M1 Pro chip (marginally beefier than the M1\nI was testing on before), I think I can see some benefit in the\ntest case I proposed upthread. It's marginal though.\n\n* On a Raspberry Pi 3B+, there's no outside-the-noise difference.\n\n* ISB doesn't exist in pre-V7 ARM, so it seems prudent to restrict\nthe patch to ARM64. I doubt any flavor of ARM32 would be able to\nbenefit anyway. (Googling finds that MariaDB made this same\nchoice not long ago [1].)\n\nSo what we've got is that there seems to be benefit at high\ncore counts, and it at least doesn't hurt at lower ones.\nThat's good enough for me, so pushed.\n\n\t\t\tregards, tom lane\n\n[1] https://jira.mariadb.org/browse/MDEV-25807\n\n\n", "msg_date": "Wed, 06 Apr 2022 19:06:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add spin_delay() implementation for Arm in s_lock.h" }, { "msg_contents": "Thanks for all the help Tom!\r\n\r\nOn 4/6/22, 6:07 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n\r\n CAUTION: This email originated from outside of the organization. 
Do not click links or open attachments unless you can confirm the sender and know the content is safe.\r\n\r\n\r\n\r\n \"Blake, Geoff\" <blakgeof@amazon.com> writes:\r\n > Hi Tom, Andres,\r\n > Any additional feedback for this patch?\r\n\r\n I did some more research and testing:\r\n\r\n * Using a Mac with the M1 Pro chip (marginally beefier than the M1\r\n I was testing on before), I think I can see some benefit in the\r\n test case I proposed upthread. It's marginal though.\r\n\r\n * On a Raspberry Pi 3B+, there's no outside-the-noise difference.\r\n\r\n * ISB doesn't exist in pre-V7 ARM, so it seems prudent to restrict\r\n the patch to ARM64. I doubt any flavor of ARM32 would be able to\r\n benefit anyway. (Googling finds that MariaDB made this same\r\n choice not long ago [1].)\r\n\r\n So what we've got is that there seems to be benefit at high\r\n core counts, and it at least doesn't hurt at lower ones.\r\n That's good enough for me, so pushed.\r\n\r\n regards, tom lane\r\n\r\n [1] https://jira.mariadb.org/browse/MDEV-25807\r\n\r\n", "msg_date": "Thu, 7 Apr 2022 13:41:23 +0000", "msg_from": "\"Blake, Geoff\" <blakgeof@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add spin_delay() implementation for Arm in s_lock.h" } ]
[ { "msg_contents": "Hi Hackers,\n\nInside the test script `src/test/recovery/t/001_stream_rep.pl`, a \ncomment at line 30 says `my_backup` is \"not mandatory\",\n\n  30 # Take backup of standby 1 (not mandatory, but useful to check if\n  31 # pg_basebackup works on a standby).\n  32 $node_standby_1->backup($backup_name);\n\nhowever if remove the backup folder \"my_backup\" after line 32, something \nlike below,\n\n  33 system_or_bail('rm', '-rf', \n'/home/user/postgres/src/test/recovery/tmp_check/t_001_stream_rep_standby_1_data/backup/my_backup');\n\nthen the test failed with message like,\n\nrecovery$ make check PROVE_TESTS='t/001_stream_rep.pl'\nt/001_stream_rep.pl .. # Looks like your test exited with 2 before it \ncould output anything.\nt/001_stream_rep.pl .. Dubious, test returned 2 (wstat 512, 0x200)\nFailed 53/53 subtests\n... ...\nResult: FAIL\nMakefile:23: recipe for target 'check' failed\nmake: *** [check] Error 1\n\nAnd then the test script takes another backup `my_backup_2`, but it \nseems this backup is not used anywhere.\n\n  35 # Take a second backup of the standby while the primary is offline.\n  36 $node_primary->stop;\n  37 $node_standby_1->backup('my_backup_2');\n  38 $node_primary->start;\n\nbecause, if I deleted the backup \"my_backup_2\" right after line 38 with \nsomething like below,\n\n39 system_or_bail('rm', '-rf', \n'/home/user/postgres/src/test/recovery/tmp_check/t_001_stream_rep_standby_1_data/backup/my_backup_2');\nthere is no impact on the test result, and it says all test cases passed.\n\nrecovery$ make check PROVE_TESTS='t/001_stream_rep.pl'\n...\nt/001_stream_rep.pl .. ok\nAll tests successful.\nFiles=1, Tests=53,  6 wallclock secs ( 0.01 usr  0.01 sys + 1.00 cusr  \n0.82 csys =  1.84 CPU)\nResult: PASS\n\nDid I misunderstand something here?\n\nThank you,\n\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. 
(Canada)\nwww.highgo.ca\n\n\n", "msg_date": "Fri, 10 Dec 2021 13:44:40 -0800", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": true, "msg_subject": "Question about 001_stream_rep.pl recovery test" }, { "msg_contents": "On Fri, Dec 10, 2021 at 01:44:40PM -0800, David Zhang wrote:\n> Inside the test script `src/test/recovery/t/001_stream_rep.pl`, a comment at\n> line 30 says `my_backup` is \"not mandatory\",\n> \n>  30 # Take backup of standby 1 (not mandatory, but useful to check if\n>  31 # pg_basebackup works on a standby).\n>  32 $node_standby_1->backup($backup_name);\n> \n> however if remove the backup folder \"my_backup\" after line 32, something\n> like below,\n\nWell, the comment is not completely incorrect IMO.  In this context, I\nread that we don't need a backup from a standby and we could just\ntake a backup from the primary instead.  If I were to fix something, I\nwould suggest to reword the comment as follows:\n# Take backup of standby 1 (could be taken from the primary, but this\n# is useful to check if pg_basebackup works on a standby).\n\n> And then the test script takes another backup `my_backup_2`, but it seems\n> this backup is not used anywhere.\n\nThis had better stay around, per commit f267c1c. 
And as far as I can\nsee, we don't have an equivalent test in a different place, where\npg_basebackup runs on a standby while its *primary* is stopped.\n--\nMichael", "msg_date": "Mon, 13 Dec 2021 21:05:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Question about 001_stream_rep.pl recovery test" }, { "msg_contents": "Thanks a lot Michael for the explanation.\n\nOn 2021-12-13 4:05 a.m., Michael Paquier wrote:\n> On Fri, Dec 10, 2021 at 01:44:40PM -0800, David Zhang wrote:\n>> Inside the test script `src/test/recovery/t/001_stream_rep.pl`, a comment at\n>> line 30 says `my_backup` is \"not mandatory\",\n>>\n>>  30 # Take backup of standby 1 (not mandatory, but useful to check if\n>>  31 # pg_basebackup works on a standby).\n>>  32 $node_standby_1->backup($backup_name);\n>>\n>> however if remove the backup folder \"my_backup\" after line 32, something\n>> like below,\n> Well, the comment is not completely incorrect IMO. In this context, I\n> read taht that we don't need a backup from a standby and we could just\n> take a backup from the primary instead. If I were to fix something, I\n> would suggest to reword the comment as follows:\n> # Take backup of standby 1 (could be taken from the primary, but this\n> # is useful to check if pg_basebackup works on a standby).\n>\n>> And then the test script takes another backup `my_backup_2`, but it seems\n>> this backup is not used anywhere.\n> This had better stay around, per commit f267c1c. And as far as I can\n> see, we don't have an equivalent test in a different place, where\n> pg_basebackup runs on a standby while its *primary* is stopped.\n> --\n> Michael\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca\n\n\n", "msg_date": "Tue, 14 Dec 2021 16:21:17 -0800", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: Question about 001_stream_rep.pl recovery test" } ]
[ { "msg_contents": "Here is a patch adding range_agg(anymultirange). Previously range_agg \nonly accepted anyrange.\n\nHere is a bug report from last month requesting this addition:\n\nhttps://www.postgresql.org/message-id/CAOC8YUcOtAGscPa31ik8UEMzgn8uAWA09s6CYOGPyP9_cBbWTw%40mail.gmail.com\n\nAs that message points out, range_intersect_agg accepts either anyrange \nor anymultirange, so it makes sense for range_agg to do the same.\n\nI noticed that the docs only mentioned range_intersect_agg(anyrange), so \nI added the anymultirange versions of both on the aggregate functions page.\n\nI also added a few more tests for range_intersect_agg since the coverage \nthere seemed light.\n\nYours,\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com", "msg_date": "Fri, 10 Dec 2021 16:24:48 -0800", "msg_from": "Paul Jungwirth <pj@illuminatedcomputing.com>", "msg_from_op": true, "msg_subject": "range_agg with multirange inputs" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nThis applies (with some fuzz) and passes installcheck-world, but a rebase\r\nis needed, because 3 lines of context aren't enough to get the doc changes\r\nin the right place in the aggregate function table. (I think generating\r\nthe patch with 4 lines of context would be enough to keep that from being\r\na recurring issue.)\r\n\r\nOne thing that seems a bit funny is this message in the new\r\nmultirange_agg_transfn:\r\n\r\n+ if (!type_is_multirange(mltrngtypoid))\r\n+ ereport(ERROR,\r\n+ (errcode(ERRCODE_DATATYPE_MISMATCH),\r\n+ errmsg(\"range_agg must be called with a multirange\")));\r\n\r\nIt's clearly copied from the corresponding test and message in\r\nrange_agg_transfn. 
They both say \"range_agg must be called ...\", which\r\nmakes perfect sense, as from the user's perspective both messages come\r\nfrom (different overloads of) a function named range_agg.\r\n\r\nStill, it could be odd to have (again from the user's perspective)\r\na function named range_agg that sometimes says \"range_agg must be\r\ncalled with a range\" and other times says \"range_agg must be called\r\nwith a multirange\".\r\n\r\nI'm not sure how to tweak the wording (of either message or both) to\r\nmake that less weird, but there's probably a way.\r\n\r\nI kind of wonder whether either message is really reachable, at least\r\nthrough the aggregate machinery in the expected way. Won't that machinery\r\nensure that it is calling the right transfn with the right type of\r\nargument? If that's the case, maybe the messages could only be seen\r\nby someone calling the transfn directly ... which also seems ruled out\r\nfor these transfns because the state type is internal. Is this whole test\r\nmore of the nature of an assertion?\r\n\r\nRegards,\r\n-Chap\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Sun, 27 Feb 2022 01:13:54 +0000", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: range_agg with multirange inputs" }, { "msg_contents": "On 2/26/22 17:13, Chapman Flack wrote:\n> This applies (with some fuzz) and passes installcheck-world, but a rebase\n> is needed, because 3 lines of context aren't enough to get the doc changes\n> in the right place in the aggregate function table. (I think generating\n> the patch with 4 lines of context would be enough to keep that from being\n> a recurring issue.)\n\nThank you for the review and the tip re 4 lines of context! 
Rebase attached.\n\n> One thing that seems a bit funny is this message in the new\n> multirange_agg_transfn:\n> \n> + if (!type_is_multirange(mltrngtypoid))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_DATATYPE_MISMATCH),\n> + errmsg(\"range_agg must be called with a multirange\")));\n\nI agree it would be more helpful to users to let them know we can take \neither kind of argument. I changed the message to \"range_agg must be \ncalled with a range or multirange\". How does that seem?\n\n> I kind of wonder whether either message is really reachable, at least\n> through the aggregate machinery in the expected way. Won't that machinery\n> ensure that it is calling the right transfn with the right type of\n> argument? If that's the case, maybe the messages could only be seen\n> by someone calling the transfn directly ... which also seems ruled out\n> for these transfns because the state type is internal. Is this whole test\n> more of the nature of an assertion?\n\nI don't think they are reachable, so perhaps they are more like asserts. \nDo you think I should change it? It seems like a worthwhile check in any \ncase.\n\nYours,\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com", "msg_date": "Mon, 28 Feb 2022 20:31:55 -0800", "msg_from": "Paul Jungwirth <pj@illuminatedcomputing.com>", "msg_from_op": true, "msg_subject": "Re: range_agg with multirange inputs" }, { "msg_contents": "On 02/28/22 23:31, Paul Jungwirth wrote:\n> On 2/26/22 17:13, Chapman Flack wrote:\n>> (I think generating\n>> the patch with 4 lines of context would be enough to keep that from being\n>> a recurring issue.)\n> \n> Thank you for the review and the tip re 4 lines of context! Rebase attached.\n\nI think the 4 lines should suffice, but it looks like this patch was\ngenerated from a rebase of the old one (with three lines) that ended up\nputting the new 'range_agg' entry ahead of 'max' in func.sgml, which\nposition is now baked into the 4 lines of context. 
:)\n\nSo I think it needs a bit of manual attention to get the additions back\nin the right places, and then a 4-context-lines patch generated from that.\n\n> I changed the message to \"range_agg must be called\n> with a range or multirange\". How does that seem?\n\nThat works for me.\n\n>> I kind of wonder whether either message is really reachable, at least\n>> through the aggregate machinery in the expected way. Won't that machinery\n>> ensure that it is calling the right transfn with the right type of\n>> argument? If that's the case, maybe the messages could only be seen\n>> by someone calling the transfn directly ... which also seems ruled out\n>> for these transfns because the state type is internal. Is this whole test\n>> more of the nature of an assertion?\n> \n> I don't think they are reachable, so perhaps they are more like asserts. Do\n> you think I should change it? It seems like a worthwhile check in any case.\n\nI would not change them to actual Assert, which would blow up the whole\nprocess on failure. If it's a genuine \"not expected to happen\" case,\nmaybe changing it to elog (or ereport with errmsg_internal) would save\na little workload for translators. But as you were copying an existing\nereport with a translatable message, there's also an argument for sticking\nto that style, and maybe mentioning the question to an eventual committer\nwho might have a stronger opinion.\n\nI did a small double-take seeing the C range_agg_finalfn being shared\nby the SQL range_agg_finalfn and multirange_agg_finalfn. I infer that\nthe reason it works is get_fn_expr_rettype works equally well with\neither parameter type.\n\nDo you think it would be worth adding a comment at the C function\nexplaining that? 
In a quick query I just did, I found no other aggregate\nfinal functions sharing a C function that way, so this could be the first.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 1 Mar 2022 16:33:32 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: range_agg with multirange inputs" }, { "msg_contents": "On 3/1/22 13:33, Chapman Flack wrote:\n> I think the 4 lines should suffice, but it looks like this patch was\n> generated from a rebase of the old one (with three lines) that ended up\n> putting the new 'range_agg' entry ahead of 'max' in func.sgml, which\n> position is now baked into the 4 lines of context. :)\n\nYou're right, my last rebase messed up the docs. Here it is fixed. Sorry \nabout that!\n\n> I would not change them to actual Assert, which would blow up the whole\n> process on failure. If it's a genuine \"not expected to happen\" case,\n> maybe changing it to elog (or ereport with errmsg_internal) would save\n> a little workload for translators.\n\nI like the elog solution. I've changed them in both places.\n\n> I did a small double-take seeing the C range_agg_finalfn being shared\n> by the SQL range_agg_finalfn and multirange_agg_finalfn. I infer that\n> the reason it works is get_fn_expr_rettype works equally well with\n> either parameter type.\n> \n> Do you think it would be worth adding a comment at the C function\n> explaining that? In a quick query I just did, I found no other aggregate\n> final functions sharing a C function that way, so this could be the first.\n\nI see 13 other shared finalfns (using select \narray_agg(aggfnoid::regproc) as procs, array_agg(aggtransfn) as \ntransfns, aggfinalfn from pg_aggregate where aggfinalfn is distinct from \n0 group by aggfinalfn having count(*) > 1;) but a comment can't hurt! 
Added.\n\nThanks,\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com", "msg_date": "Sat, 5 Mar 2022 12:53:15 -0800", "msg_from": "Paul Jungwirth <pj@illuminatedcomputing.com>", "msg_from_op": true, "msg_subject": "Re: range_agg with multirange inputs" }, { "msg_contents": "On 03/05/22 15:53, Paul Jungwirth wrote:\n> On 3/1/22 13:33, Chapman Flack wrote:\n>> I think the 4 lines should suffice, but it looks like this patch was\n>> generated from a rebase of the old one (with three lines) that ended up\n>> putting the new 'range_agg' entry ahead of 'max' in func.sgml, which\n>> position is now baked into the 4 lines of context. :)\n> \n> You're right, my last rebase messed up the docs. Here it is fixed. Sorry\n> about that!\n\nWhen I apply this patch, I get a func.sgml with two entries for\nrange_intersect_agg(anymultirange).\n\n> I like the elog solution. I've changed them in both places.\n\nIt looks like you've now got elog in three places: the \"must be called\nwith a range or multirange\" in multirange_agg_transfn and\nmultirange_intersect_agg_transfn, and the \"called in non-aggregate\ncontext\" in multirange_agg_transfn.\n\nI think that last is also ok, given that its state type is internal,\nso it shouldn't be reachable in a user call.\n\nIn range_agg_transfn, you've changed the message in the \"must be called\nwith a range or multirange\"; that seems like another good candidate to\nbe an elog.\n\n> I see 13 other shared finalfns (using select array_agg(aggfnoid::regproc) as\n> procs, array_agg(aggtransfn) as transfns, aggfinalfn from pg_aggregate where\n> aggfinalfn is distinct from 0 group by aggfinalfn having count(*) > 1;) but\n> a comment can't hurt! Added.\n\nI think your query finds aggregate declarations that share the same SQL\nfunction declaration as their finalizer functions. 
That seems to be more\ncommon.\n\nThe query I used looks for cases where different SQL-declared functions\nappear as finalizers of aggregates, but the different SQL declared functions\nshare the same internal C implementation. That's the query where this seems\nto be the unique result.\n\nWITH\n finals(regp) AS (\n SELECT DISTINCT\n CAST(aggfinalfn AS regprocedure)\n FROM\n pg_aggregate\n WHERE\n aggfinalfn <> 0 -- InvalidOid\n )\nSELECT\n prosrc, array_agg(regp)\n FROM\n pg_proc, finals\n WHERE\n oid = regp AND prolang = 12 -- INTERNALlanguageId\n GROUP BY\n prosrc\n HAVING\n count(*) > 1;\n\nIn other words, I think the interesting thing to say in the C comment\nis not \"shared by range_agg(anyrange) and range_agg(anymultirange)\", but\n\"shared by range_agg_finalfn(internal,anyrange) and\nmultirange_agg_finalfn(internal,anymultirange)\".\n\nIt seems a little extra surprising to have one C function declared in SQL\nwith two different names and parameter signatures. It ends up working\nout because it relies on get_fn_expr_rettype, which can do its job for\neither polymorphic type it might find in the parameter declaration.\nBut that's a bit subtle. :)\n\nRegards,\n-Chap\n\n\n", "msg_date": "Thu, 10 Mar 2022 17:07:12 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: range_agg with multirange inputs" }, { "msg_contents": "On 3/10/22 14:07, Chapman Flack wrote:\n> When I apply this patch, I get a func.sgml with two entries for\n> range_intersect_agg(anymultirange).\n\nArg, fixed.\n\n> In range_agg_transfn, you've changed the message in the \"must be called\n> with a range or multirange\"; that seems like another good candidate to\n> be an elog.\n\nAgreed. Updated here.\n\n> I think your query finds aggregate declarations that share the same SQL\n> function declaration as their finalizer functions. 
That seems to be more\n> common.\n> \n> The query I used looks for cases where different SQL-declared functions\n> appear as finalizers of aggregates, but the different SQL declared functions\n> share the same internal C implementation.\n\nOkay, I see. I believe that is quite common for ordinary SQL functions. \nSharing a prosrc seems even less remarkable than sharing an aggfinalfn. \nYou're right there are no cases for other finalfns yet, but I don't \nthink there is anything special about finalfns that would make this a \nweirder thing to do there than with ordinary functions. Still, noting it \nwith a comment does seem helpful. I've updated the remark to match what \nyou suggested.\n\nThank you again for the review, and sorry for so many iterations! :-)\n\nYours,\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com", "msg_date": "Fri, 11 Mar 2022 19:18:50 -0800", "msg_from": "Paul Jungwirth <pj@illuminatedcomputing.com>", "msg_from_op": true, "msg_subject": "Re: range_agg with multirange inputs" }, { "msg_contents": "On 03/11/22 22:18, Paul Jungwirth wrote:\n> Arg, fixed.\n> \n>> In range_agg_transfn, you've changed the message in the \"must be called\n>> with a range or multirange\"; that seems like another good candidate to\n>> be an elog.\n> \n> Agreed. Updated here.\n\nThis looks good to me and passes installcheck-world, so I'll push\nthe RfC button.\n\n> Sharing a prosrc seems even less remarkable than sharing an aggfinalfn.\n> You're right there are no cases for other finalfns yet, but I don't think\n> there is anything special about finalfns that would make this a weirder\n> thing to do there than with ordinary functions.\n\nYou sent me back to look at how many of those there are. 
I get 42 cases\nof shared prosrc (43 now).\n\nThe chief subgroup of those looks to involve sharing between parameter\nsignatures where the types have identical layouts and the semantic\ndifferences are unimportant to the function in question (comparisons\nbetween bit or between varbit, overlaps taking timestamp or timestamptz,\netc.).\n\nThe other prominent group is range and multirange constructors, where\nthe C function has an obviously generic name like range_constructor2\nand gets shared by a bunch of SQL declarations.\n\nI think here we've added the first instance where the C function is\nshared by SQL-declared functions accepting two different polymorphic\npseudotypes. But it's clearly simple and works, nothing objectionable\nabout it.\n\nI had experimented with renaming multirange_agg_finalfn to just\nrange_agg_finalfn so it would just look like two overloads of one\nfunction sharing a prosrc, and ultimately gave up because genbki.pl\ncouldn't resolve the OID where the name is used in pg_aggregate.dat.\n\nThat's why it surprised me to see three instances where other functions\n(overlaps, isfinite, name) do use the same SQL name for different\noverloads, but the explanation seems to be that nothing else at genbki\ntime refers to those, so genbki's unique-name limitation doesn't affect\nthem.\n\nNeither here nor there for this patch, but an interesting new thing\nI learned while reviewing it.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sat, 12 Mar 2022 21:03:56 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: range_agg with multirange inputs" }, { "msg_contents": "Fwiw the cfbot is failing due to a duplicate OID. 
Traditionally we\ndidn't treat duplicate OIDs as reason to reject a patch because\nthey're inevitable as other patches get committed and the committer\ncan just renumber them.\n\nI think the cfbot kind of changes this calculus since it's a pain to lose\nthe visibility into whether the rest of the tests are passing that the\ncfbot normally gives us.\n\nIf it's not too much of a hassle could you renumber and resubmit the patch\nwith an updated OID?\n\n[10:54:57.606] su postgres -c \"make -s -j${BUILD_JOBS} world-bin\"\n[10:54:57.927] Duplicate OIDs detected:\n[10:54:57.927] 8000\n\n\n", "msg_date": "Mon, 28 Mar 2022 16:17:34 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: range_agg with multirange inputs" }, { "msg_contents": "This patch has been committed.  I split it into a few pieces.\n\nOn 12.03.22 04:18, Paul Jungwirth wrote:\n> On 3/10/22 14:07, Chapman Flack wrote:\n>> When I apply this patch, I get a func.sgml with two entries for\n>> range_intersect_agg(anymultirange).\n> \n> Arg, fixed.\n> \n>> In range_agg_transfn, you've changed the message in the \"must be called\n>> with a range or multirange\"; that seems like another good candidate to\n>> be an elog.\n> \n> Agreed.  Updated here.\n\nI kept those messages as \"range\" or \"multirange\" separately, instead of \n\"range or multirange\".  This way, we don't have to update all the \nmessages of this kind when a new function is added.  Since these are \nonly internal messages anyway, I opted for higher maintainability.\n\n\n", "msg_date": "Wed, 30 Mar 2022 20:43:03 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: range_agg with multirange inputs" } ]
[ { "msg_contents": "Hi,\n\nWhen writing / debugging an isolation test it's sometimes useful to see which\nsession holds what lock etc. I find it kind of painful to map pg_stat_activity\n/ pg_locks / log output to the isolationtester spec. Sometimes its easy enough\nto infer identity based on a statement, but far from all the time.\n\nI found it very helpful to have each session's setup step do something like\n SET application_name = 'isolation/prune-recently-dead/vac';\n\nThese days isolationtester.c already prefixes log output with the session\nname. How about doing the same for application_name? It's a *tad* more\ncomplicated than I'd like because isolationtester.c currently doesn't know the\nname of the test its executing.\n\nThe attached patch executes\n SELECT set_config('application_name', current_setting('application_name') || '/' || $1, false);\nwhen establishing connections to deal with that.\n\nAs attached this appends \"control connection\" for the control connection, but\nperhaps we should just not append anything for that?\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 10 Dec 2021 17:20:52 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "isolationtester: add session name to application name" }, { "msg_contents": "\nOn 12/10/21 20:20, Andres Freund wrote:\n> The attached patch executes\n> SELECT set_config('application_name', current_setting('application_name') || '/' || $1, false);\n> when establishing connections to deal with that.\n\n\nSounds good\n\n\n>\n> As attached this appends \"control connection\" for the control connection, but\n> perhaps we should just not append anything for that?\n>\n\n\n\n\"control connection\" seems reasonable.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 11 Dec 2021 08:12:55 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: isolationtester: add session 
name to application name" }, { "msg_contents": "On Fri, Dec 10, 2021 at 05:20:52PM -0800, Andres Freund wrote:\n> These days isolationtester.c already prefixes log output with the session\n> name. How about doing the same for application_name? It's a *tad* more\n> complicated than I'd like because isolationtester.c currently doesn't know the\n> name of the test its executing.\n\n+1 for the idea.  Maybe it could be backpatched?  It could be really\nuseful to have the same amount of details across all the stable\nbranches to ease any future backpatch of a test.  It does not seem to\nme that many people would rely much on application_name in out-of-core\ntests, but if that's the case such tests would suddenly break after\nthe next minor upgrade.\n\n> As attached this appends \"control connection\" for the control connection, but\n> perhaps we should just not append anything for that?\n\nKeeping \"control connection\" seems fine for me for these.\n\n> +\t\t * easier to map spec file sesions to log output and\n\nOne s/sesions/sessions/ here.\n--\nMichael", "msg_date": "Mon, 13 Dec 2021 19:46:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: isolationtester: add session name to application name" }, { "msg_contents": "Hi,\n\nOn 2021-12-13 19:46:34 +0900, Michael Paquier wrote:\n> On Fri, Dec 10, 2021 at 05:20:52PM -0800, Andres Freund wrote:\n> > These days isolationtester.c already prefixes log output with the session\n> > name. How about doing the same for application_name? It's a *tad* more\n> > complicated than I'd like because isolationtester.c currently doesn't know the\n> > name of the test its executing.\n> \n> +1 for the idea.  Maybe it could be backpatched?\n\nNot entirely trivially - the changes have some dependencies on other changes\n(e.g. b1907d688, more on 741d7f104, but that was backpatched). 
I guess we\ncould backpatch b1907d688 as well, but I'm not sure its worth it?\n\n\n> > +\t\t * easier to map spec file sesions to log output and\n> \n> One s/sesions/sessions/ here.\n\nAh, good catch.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 13 Dec 2021 10:48:34 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: isolationtester: add session name to application name" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-12-13 19:46:34 +0900, Michael Paquier wrote:\n>> +1 for the idea. Maybe it could be backpatched?\n\n> Not entirely trivially - the changes have some dependencies on other changes\n> (e.g. b1907d688, more on 741d7f104, but that was backpatched). I guess we\n> could backpatch b1907d688 as well, but I'm not sure its worth it?\n\nI think we've more recently had the idea that isolationtester features\nshould be back-patched to avoid gotchas when back-patching test cases.\nFor instance, all the isolationtester work I did this past summer was\nback-patched. So from that vantage point, back-patching b1907d688\nseems fine.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Dec 2021 13:57:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: isolationtester: add session name to application name" }, { "msg_contents": "Hi,\n\nOn 2021-12-13 13:57:52 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-12-13 19:46:34 +0900, Michael Paquier wrote:\n> >> +1 for the idea. Maybe it could be backpatched?\n>\n> > Not entirely trivially - the changes have some dependencies on other changes\n> > (e.g. b1907d688, more on 741d7f104, but that was backpatched). 
I guess we\n> > could backpatch b1907d688 as well, but I'm not sure its worth it?\n>\n> I think we've more recently had the idea that isolationtester features\n> should be back-patched to avoid gotchas when back-patching test cases.\n> For instance, all the isolationtester work I did this past summer was\n> back-patched. So from that vantage point, back-patching b1907d688\n> seems fine.\n\nSince there seems support for that approach, I've backpatched b1907d688 and\nwill push application_name isolationtester change once running the tests\nacross all branches finishes locally.\n\nRegards,\n\nAndres\n\n\n", "msg_date": "Mon, 13 Dec 2021 12:01:51 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: isolationtester: add session name to application name" } ]
[ { "msg_contents": "Hi,\n\nI would like to have a statically linked linux x64 version of postgresql\n14.1 for creating a distributable sample database.\n\nI could not find it anywhere for downloading (someone has a link maybe?),\nso I tried to build one from the sources.\n\nI am following the instructions on\nhttps://www.postgresql.org/docs/14/install-short.html and to get a\nstatically linked binary, I add the LDFLAGS='--static' argument to the\nconfigure statement.\nI guess that is the way do this.\n\nIn stead of building on my laptop, I use docker so that building does not\ndepend on what I have installed.\nWith this Dockerfile building fails.\n\n==== Dockerfile ====\nFROM ubuntu:20.04\nRUN apt-get update && apt-get install -y build-essential libreadline-dev\nzlib1g-dev\nCOPY postgresql-14.1.tar.gz .\nRUN tar -xvzf postgresql-14.1.tar.gz\nRUN rm postgresql-14.1.tar.gz\nWORKDIR postgresql-14.1\nRUN ./configure LDFLAGS='--static'\nRUN make\nRUN make install\nRUN adduser postgres\nRUN mkdir /usr/local/pgsql/data\nRUN chown postgres /usr/local/pgsql/data\nUSER postgres\nRUN /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data\n==== end of Dockerfile ====\n\n\n==== output from docker build ====\n.......\nmake -C backend/utils/mb/conversion_procs all\nmake[2]: Entering directory\n'/postgresql-14.1/src/backend/utils/mb/conversion_procs'\nmake -C cyrillic_and_mic all\nmake[3]: Entering directory\n'/postgresql-14.1/src/backend/utils/mb/conversion_procs/cyrillic_and_mic'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith\n-Wdeclaration-after-statement -Werror=vla -Wendif-labels\n-Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n-Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard\n-Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC\n-I../../../../../../src/include -D_GNU_SOURCE -c -o cyrillic_and_mic.o\ncyrillic_and_mic.c\ngcc -Wall -Wmissing-prototypes -Wpointer-arith\n-Wdeclaration-after-statement -Werror=vla 
-Wendif-labels\n-Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n-Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard\n-Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -shared -o\ncyrillic_and_mic.so cyrillic_and_mic.o -L../../../../../../src/port\n-L../../../../../../src/common --static -Wl,--as-needed\n/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/crtbeginT.o: relocation\nR_X86_64_32 against hidden symbol `__TMC_END__' can not be used when making\na shared object\ncollect2: error: ld returned 1 exit status\nmake[3]: *** [../../../../../../src/Makefile.shlib:293:\ncyrillic_and_mic.so] Error 1\nmake[3]: Leaving directory\n'/postgresql-14.1/src/backend/utils/mb/conversion_procs/cyrillic_and_mic'\nmake[2]: *** [Makefile:25: all-cyrillic_and_mic-recurse] Error 2\nmake[2]: Leaving directory\n'/postgresql-14.1/src/backend/utils/mb/conversion_procs'\nmake[1]: *** [Makefile:42: all-backend/utils/mb/conversion_procs-recurse]\nError 2\nmake[1]: Leaving directory '/postgresql-14.1/src'\nmake: *** [GNUmakefile:11: all-src-recurse] Error 2\nThe command '/bin/sh -c make' returned a non-zero code: 2\n==== end of output from docker build ====\n\n\nWhy does this error happen?\n\nI seem to be able to work around it by copying some libs: copy\n/usr/lib/gcc/x86_64-linux-gnu/7/crtbeginS.o to\n/usr/lib/gcc/x86_64-linux-gnu/7/crtbeginT.o\nA new attempt with this Dockerfile does build the software, but it then\nfails with initdb:\n\n\n==== 2nd Dockerfile ====\nFROM ubuntu:20.04\nRUN apt-get update && apt-get install -y build-essential libreadline-dev\nzlib1g-dev\nCOPY postgresql-14.1.tar.gz .\nRUN tar -xvzf postgresql-14.1.tar.gz\nRUN rm postgresql-14.1.tar.gz\nWORKDIR postgresql-14.1\nRUN cp /usr/lib/gcc/x86_64-linux-gnu/9/crtbeginS.o\n/usr/lib/gcc/x86_64-linux-gnu/9/crtbeginT.o\nRUN ./configure LDFLAGS='--static'\nRUN make\nRUN make install\nRUN adduser postgres\nRUN mkdir /usr/local/pgsql/data\nRUN chown postgres
/usr/local/pgsql/data\nUSER postgres\nRUN /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data\n==== end of 2nd Dockerfile ====\n\nInitdb output:\n\n======== 2nd docker build output =======\n......\nStep 15/15 : RUN /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data\n ---> Running in 8c4fc574766e\nThe files belonging to this database system will be owned by user\n\"postgres\".\nThis user must also own the server process.\n\nThe database cluster will be initialized with locale \"C\".\nThe default database encoding has accordingly been set to \"SQL_ASCII\".\nThe default text search configuration will be set to \"english\".\n\nData page checksums are disabled.\n\nfixing permissions on existing directory /usr/local/pgsql/data ... ok\ncreating subdirectories ... ok\nselecting dynamic shared memory implementation ... sysv\nselecting default max_connections ... 100\nselecting default shared_buffers ... 128MB\nselecting default time zone ... UTC\ncreating configuration files ... ok\nrunning bootstrap script ... ok\nperforming post-bootstrap initialization ... 
2021-12-11 19:08:58.765 UTC\n[16] FATAL: could not load library\n\"/usr/local/pgsql/lib/dict_snowball.so\":\n/usr/local/pgsql/lib/dict_snowball.so: undefined symbol:\nCurrentMemoryContext\n2021-12-11 19:08:58.765 UTC [16] STATEMENT: CREATE FUNCTION\ndsnowball_init(INTERNAL)\n RETURNS INTERNAL AS '$libdir/dict_snowball', 'dsnowball_init'\n LANGUAGE C STRICT;\n\nchild process exited with exit code 1\ninitdb: removing contents of data directory \"/usr/local/pgsql/data\"\nThe command '/bin/sh -c /usr/local/pgsql/bin/initdb -D\n/usr/local/pgsql/data' returned a non-zero code: 1\n======== end of 2nd docker build output =======\n\n\nAnybody has an idea why this does not work or what I am doing wrong and how\nI can get a statically linked binary?\n\nWithout the LDFLAGS the build and the initdb work fine.\n\n\nThanks in advance,\n\nRob", "msg_date": "Sat, 11 Dec 2021 21:06:28 +0100", "msg_from": "Rob Gansevles <rgansevles@gmail.com>", "msg_from_op": true, "msg_subject": "Building postgresql from sources, statically linked, linux" }, { "msg_contents": "Rob Gansevles <rgansevles@gmail.com> writes:\n> I would like to have a statically linked linux x64 version of postgresql\n> 14.1 for creating a distributable sample database.\n\nIf you are trying to make the server into a monolithic object, that's\nnot something we support or have any particular interest in supporting.\nExtensions such as PLs are always built as shared libraries. So are\nthe encoding conversion modules, which your build is failing on before\nit gets to any others.
It would take a good deal of fooling around\nto change that, both in the build process and in the way that the\ncore server invokes that functionality.\n\nStatically linking client-side programs such as psql is more within\nthe realm of feasibility, but you'd have to adjust just those builds\nrather than trying to apply --static across the board.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 11 Dec 2021 17:00:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Building postgresql from sources, statically linked, linux" } ]
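The workaround Tom's reply points at — build the server normally so its loadable modules stay shared, and statically link only a client program such as psql — can be sketched as a variant of Rob's Dockerfile. This is an untested illustration, not a supported build recipe: the per-directory relink of psql and the use of --without-readline (to avoid needing static readline/ncurses archives) are assumptions, and a fully static glibc binary may still warn about dynamically loaded name-resolution code.

==== sketch: static psql only (hypothetical) ====
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y build-essential zlib1g-dev
COPY postgresql-14.1.tar.gz .
RUN tar -xzf postgresql-14.1.tar.gz
WORKDIR postgresql-14.1
# Build the whole tree normally, so dict_snowball.so and the
# conversion modules remain shared libraries and initdb keeps working.
RUN ./configure --without-readline && make && make install
# Relink just psql with static flags (illustrative; the static
# archives libpq.a and libz.a must be available at link time).
RUN cd src/bin/psql && rm -f psql && make LDFLAGS='-static'
==== end of sketch ====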
[ { "msg_contents": "Hi,\n\nWhile porting some new IO code to lots of OSes I noticed in passing\nthat there is now a way to do synchronous fdatasync() on Windows.\nThis mechanism doesn't have an async variant, which is what I was\nactually looking for (which turns out to be doable with bleeding edge\nIoRings, more on that later), but I figured this might be useful\nanyway. I see that at least one other open source database has\ndiscovered it and seen speedups. Like some other file API\nimprovements discussed recently, it's Windows 10+ and NTFS only. I\ntried out a quick POC patch and it runs a bit faster than fsync(), as\nexpected. I'm not sure if it's worth bothering with or not given the\nother options, but figured it was worth sharing.\n\nWhile testing that I also couldn't resist adding an extra output line\nto pg_test_fsync to run open_datasync in buffered I/O mode, like\nPostgreSQL actually does in real life. I guess I should really change\nit to duplicate less code, though...\n\n[1] https://www.postgresql.org/message-id/flat/1527846213.2475.31.camel%40cybertec.at", "msg_date": "Sun, 12 Dec 2021 15:48:10 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Windows now has fdatasync()" }, { "msg_contents": "Great. File sync is a Nice extension for me, as I don't know all file\nstructures.\n\nThomas Munro <thomas.munro@gmail.com> wrote on Sun, 12 Dec 2021, 03:48:\n\n> Hi,\n>\n> While porting some new IO code to lots of OSes I noticed in passing\n> that there is now a way to do synchronous fdatasync() on Windows.\n> This mechanism doesn't have an async variant, which is what I was\n> actually looking for (which turns out to be doable with bleeding edge\n> IoRings, more on that later), but I figured this might be useful\n> anyway. I see that at least one other open source database has\n> discovered it and seen speedups. Like some other file API\n> improvements discussed recently, it's Windows 10+ and NTFS only.
I\n> tried out a quick POC patch and it runs a bit faster than fsync(), as\n> expected. I'm not sure if it's worth bothering with or not given the\n> other options, but figured it was worth sharing.\n>\n> While testing that I also couldn't resist adding an extra output line\n> to pg_test_fsync to run open_datasync in buffered I/O mode, like\n> PostgreSQL actually does in real life. I guess I should really change\n> it to duplicate less code, though...\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/1527846213.2475.31.camel%40cybertec.at\n>\n", "msg_date": "Sun, 12 Dec 2021 07:07:36 +0100", "msg_from": "Sascha Kuhl <yogidabanli@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "On Sun, Dec 12, 2021 at 3:48 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> [...] I\n> tried out a quick POC patch and it runs a bit faster than fsync(), as\n> expected. I'm not sure if it's worth bothering with or not given the\n> other options, but figured it was worth sharing.\n\nOne reason to consider developing this further is the problem\ndiscussed in the aptly named thread \"Loaded footgun open_datasync on\nWindows\"[1] (not the problem that was fixed in pg_test_fsync, but the\nproblem with cache control, or lack thereof). I saw 10x more TPS with\nopen_datasync than with this experimental fdatasync on my little test\nVM, which was more than a little fishy, so I turned off the device\nwrite caching in \"Device Manager\" and got about the same number from\nopen_datasync and fdatasync. Clearly you can lose committed\ntransactions on power loss[2] with the default OS settings and default\nPostgreSQL settings, though perhaps only if you're on SATA storage due\nto lack of FUA pass-through[3] (?). I didn't try an NVMe stack.\n\nThat suggests that fdatasync would actually be a better default ...\nexcept for the problems already mentioned with versions and not\nworking on non-NTFS (not sure how it fails on non-NTFS, though, maybe\nit does a full flush, [4] doesn't say).\n\n(The patch is a little rough; I couldn't figure out the headers to get\nthat macro.
Insert my usual disclaimer that I'm not a Windows guy,\nthis is stuff I'm just figuring out, all clues welcome...)\n\n[1] https://www.postgresql.org/message-id/flat/1527846213.2475.31.camel%40cybertec.at\n[2] https://github.com/MicrosoftDocs/feedback/issues/2747\n[3] https://devblogs.microsoft.com/oldnewthing/20170510-00/?p=95505\n[4] https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/ntifs/nf-ntifs-ntflushbuffersfileex\n\n\n", "msg_date": "Mon, 13 Dec 2021 10:31:29 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "I added a commitfest entry for this to try to attract Windows-hacker\nreviews. I wondered about adjusting it to run on older systems, but I\nthink we're about ready to drop support for Windows < 10 anyway, so\nmaybe I'll go and propose that separately, instead.\n\nhttps://commitfest.postgresql.org/37/3530/\n\n\n", "msg_date": "Fri, 4 Feb 2022 15:24:08 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "On Sun, Dec 12, 2021 at 4:32 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> One reason to consider developing this further is the problem\n> discussed in the aptly named thread \"Loaded footgun open_datasync on\n> Windows\"[1] (not the problem that was fixed in pg_test_fsync, but the\n> problem with cache control, or lack thereof). I saw 10x more TPS with\n> open_datasync than with this experimental fdatasync on my little test\n> VM, which was more than a little fishy, so I turned off the device\n> write caching in \"Device Manager\" and got about the same number from\n> open_datasync and fdatasync. Clearly you can lose committed\n> transactions on power loss[2] with the default OS settings and default\n> PostgreSQL settings, though perhaps only if you're on SATA storage due\n> to lack of FUA pass-through[3] (?). 
I didn't try an NVMe stack.\n>\n> That suggests that fdatasync would actually be a better default ...\n> except for the problems already mentioned with versions and not\n> working on non-NTFS (not sure how it fails on non-NTFS, though, maybe\n> it does a full flush, [4] doesn't say).\n\nSo my impression is that today we ship defaults that are unsafe on\nWindows. I don't really understand much of what you are saying here,\nbut if there's a way we can stop doing that, +1 from me, especially if\nit allows us to retain reasonable performance.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 4 Feb 2022 08:10:21 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "On Sat, Feb 5, 2022 at 2:10 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> So my impression is that today we ship defaults that are unsafe on\n> Windows. I don't really understand much of what you are saying here,\n> but if there's a way we can stop doing that, +1 from me, especially if\n> it allows us to retain reasonable performance.\n\nThe PostgreSQL default in combination with the Windows default is\nunsafe on SATA drives, but <get-out-clause>if you read our\ndocumentation carefully you might figure out that you need to disable\nwrite caching on your disk</>.\nhttps://www.postgresql.org/docs/14/wal-reliability.html says:\n\n\"Consumer-grade IDE and SATA drives are particularly likely to have\nwrite-back caches that will not survive a power failure. Many\nsolid-state drives (SSD) also have volatile write-back caches. [...]\nOn Windows, if wal_sync_method is open_datasync (the default), write\ncaching can be disabled by unchecking My Computer\\Open\\disk\ndrive\\Properties\\Hardware\\Properties\\Policies\\Enable write caching on\nthe disk. 
Alternatively, set wal_sync_method to fsync or\nfsync_writethrough, which prevent write caching.\"\n\nI'm not proposing we change our default to this new level, because it\ndoesn't work on non-NTFS, an annoying complication. This patch would\njust provide something faster to put after \"Alternatively\".\n\n(Actually that whole page needs a refresh. IDE is gone. The\ndiscussion about \"recent\" support for flushing caches is a bit out of\ndate, and in fact the problem with open_datasync on this OS is because\nof problems with drivers and\nhttps://en.wikipedia.org/wiki/Disk_buffer#Force_Unit_Access_(FUA), not\nFLUSH CACHE EXT.)\n\nHere's an updated patch that fixes some silly problems seen on CI.\nThere's something a bit clunky/weird about this HAVE_FDATASYNC stuff,\nmaybe I can find a tidier way, but it's enough for experimentation:\n\nFor Mingw, I unconditionally add src/port/fdatasync.o to LIBOBJS, and\nI unconditionally #define HAVE_FDATASYNC in win32_port.h, and I also\nchanged c.h's declaration of fdatasync() because it comes before\nport.h is included (I guess I could move it down instead?).\n\nFor MSVC, I unconditionally add fdatasync.o to @pgportfiles, and\nHAVE_FDATASYNC is defined in Solution.pm.\n\nIt'd be interesting to see pg_test_fsync.exe output on real hardware.\nHere's what a little Windows 10 VM with a virtual SATA drive says:\n\nC:\\Users\\thmunro>c:\\pg\\bin\\pg_test_fsync.exe\n5 seconds per test\nO_DIRECT supported on this platform for open_datasync and open_sync.\n\nCompare file sync methods using one 8kB write:\n(in wal_sync_method preference order, except fdatasync is Linux's default)\n open_datasync (direct) 7914.872 ops/sec 126 usecs/op\n open_datasync (buffered) 6593.056 ops/sec 152 usecs/op\n fdatasync 650.317 ops/sec 1538 usecs/op\n fsync 512.423 ops/sec 1952 usecs/op\n fsync_writethrough 550.881 ops/sec 1815 usecs/op\n open_sync (direct) n/a\n open_sync (buffered) n/a", "msg_date": "Sat, 5 Feb 2022 10:23:43 +1300", "msg_from": "Thomas
Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "On Fri, Feb 4, 2022 at 4:24 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I'm not proposing we change our default to this new level, because it\n> doesn't work on non-NTFS, an annoying complication. This patch would\n> just provide something faster to put after \"Alternatively\".\n\nHmm. I thought NTFS had kind of won the filesystem war on the Windows\nside of things. No?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 4 Feb 2022 18:54:00 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "On Sat, Feb 5, 2022 at 12:54 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Feb 4, 2022 at 4:24 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I'm not proposing we change our default to this new level, because it\n> > doesn't work on non-NTFS, an annoying complication. This patch would\n> > just provide something faster to put after \"Alternatively\".\n>\n> Hmm. I thought NTFS had kind of won the filesystem war on the Windows\n> side of things. No?\n\nAgainst FAT, yes, but there are also SMB/CIFS (network) and the new\nReFS (which we recently broke and then unbroke[1]). 
I haven't tried\nthose things, lacking general Windows-fu, but I suppose they'd reject\nthis and we'd panic, because the docs say \"file systems supported:\nNTFS\"[2].\n\n[1] https://www.postgresql.org/message-id/flat/16854-905604506e23d5c0%40postgresql.org\n[2] https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/ntifs/nf-ntifs-ntflushbuffersfileex\n\n\n", "msg_date": "Sat, 5 Feb 2022 14:46:22 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "On Sun, Dec 12, 2021 at 03:48:10PM +1300, Thomas Munro wrote:\n> I tried out a quick POC patch and it runs a bit faster than fsync(), as\n> expected.\n\nGood news, as a too high difference would be suspect :)\n\nHow much difference does it make in % and are the numbers rather\nreproducible? Just wondering..\n--\nMichael", "msg_date": "Sun, 6 Feb 2022 15:20:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "On Sun, Feb 6, 2022 at 7:20 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Sun, Dec 12, 2021 at 03:48:10PM +1300, Thomas Munro wrote:\n> > I tried out a quick POC patch and it runs a bit faster than fsync(), as\n> > expected.\n>\n> Good news, as a too high difference would be suspect :)\n>\n> How much difference does it make in % and are the numbers rather\n> reproducible? 
Just wondering..\n\nI've only tested on a qemu/kvm virtual machine with a virtual SATA\ndisk device, so take this with a bucket of salt, but I think that's\nenough to see the impact of 'slow' SATA commands hitting the device\nand being waited for, and what I see is that wal_sync_method=fdatasync\ndoes about 25% more TPS than wal_sync_method=fsync, and\nwal_sync_method=open_datasync is a wildly higher number that I don't\nbelieve (ie I don't believe it waited for power loss durability and\nthe links above support that understanding), but tumbles back to earth\nand almost exactly matches the wal_sync_method=fdatasync number when\nthe write cache is disabled.\n\n\n", "msg_date": "Fri, 11 Feb 2022 11:12:49 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "I've bumped this to the next cycle, so I can hopefully skip the\nmissing version detection stuff that I have no way to test (no CI, no\nbuild farm, and I have zero interest in dumpster diving for Windows 7\nor whatever installations).\n\nI propose that we drop support for Windows versions older than\n10/Server 2016 in the PostgreSQL 16 cycle, because the OS patches for\neverything older come to an end in October next year[1], and we have a\nlot of patches relating to modern Windows features that stall on\ndetails about old systems that no one actually has.\n\n[1] https://en.wikipedia.org/wiki/List_of_Microsoft_Windows_versions\n\n\n", "msg_date": "Fri, 8 Apr 2022 14:56:15 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "On Fri, Apr 08, 2022 at 02:56:15PM +1200, Thomas Munro wrote:\n> I propose that we drop support for Windows versions older than\n> 10/Server 2016 in the PostgreSQL 16 cycle, because the OS patches for\n> everything older come to an end in October next year[1], and we have a\n> lot of patches 
relating to modern Windows features that stall on\n> details about old systems that no one actually has.\n> \n> [1] https://en.wikipedia.org/wiki/List_of_Microsoft_Windows_versions\n\nDo you think that we could raise the minimum C standard on WIN32 to\nC11, at least for MSVC? There is a patch floating around to add\npg_attribute_aligned() and perhaps pg_attribute_noreturn() for MSVC:\nhttps://www.postgresql.org/message-id/Yk6UgCGlZKuxRr4n@paquier.xyz\n\nnoreturn() needed at least C11:\nhttps://docs.microsoft.com/en-us/cpp/c-language/noreturn?view=msvc-140\n\nPerhaps we'd better also bump up the minimum version of MSVC\nsupported..\n--\nMichael", "msg_date": "Fri, 8 Apr 2022 13:35:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Apr 08, 2022 at 02:56:15PM +1200, Thomas Munro wrote:\n>> I propose that we drop support for Windows versions older than\n>> 10/Server 2016 in the PostgreSQL 16 cycle,\n\nDo we have any data on what people are actually using?\n\n> Do you think that we could raise the minimum C standard on WIN32 to\n> C11, at least for MSVC?\n\nAs long as the C11-isms are in MSVC-only code, it seems like this is\nexactly equivalent to setting a minimum MSVC version. I don't see\nan objection-in-principle there, it's just a practical question of\nhow far back is reasonable to support MSVC versions. (That's very\ndistinct from how far back we need the built code to run.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 08 Apr 2022 00:40:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "On Fri, Apr 08, 2022 at 12:40:55AM -0400, Tom Lane wrote:\n> As long as the C11-isms are in MSVC-only code, it seems like this is\n> exactly equivalent to setting a minimum MSVC version. 
I don't see\n> an objection-in-principle there, it's just a practical question of\n> how far back is reasonable to support MSVC versions. (That's very\n> distinct from how far back we need the built code to run.)\n\nGood question. Older versions of VS are available, so this is not a\nproblem:\nhttps://visualstudio.microsoft.com/vs/older-downloads/\n\nI think that we should at least drop 2013, as there is a bunch of\nstuff related to _MSC_VER < 1900 that could be removed with that,\nparticularly for locales.\n--\nMichael", "msg_date": "Fri, 8 Apr 2022 16:10:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "On Fri, 8 Apr 2022 at 05:41, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Fri, Apr 08, 2022 at 02:56:15PM +1200, Thomas Munro wrote:\n> >> I propose that we drop support for Windows versions older than\n> >> 10/Server 2016 in the PostgreSQL 16 cycle,\n>\n> Do we have any data on what people are actually using?\n>\n\nNone that I know of. Anecdotally, we dropped support for pgAdmin on Windows\n< 8 (2012 for the server edition), and had a single complaint - and the\nuser happily acknowledged they were on an old release and expected support\nto be dropped sooner or later. Windows 8 was a pretty unpopular release, so\nI would expect shifting to 10/2016+ for PG 16 would be unlikely to be a\nmajor problem.\n\nFWIW, Python dropped support for < 8/2012 with v3.9.\n\n\n>\n> > Do you think that we could raise the minimum C standard on WIN32 to\n> > C11, at least for MSVC?\n>\n> As long as the C11-isms are in MSVC-only code, it seems like this is\n> exactly equivalent to setting a minimum MSVC version. I don't see\n> an objection-in-principle there, it's just a practical question of\n> how far back is reasonable to support MSVC versions. 
(That's very\n> distinct from how far back we need the built code to run.)\n>\n> regards, tom lane\n>\n>\n>\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com\n", "msg_date": "Fri, 8 Apr 2022 08:56:25 +0100", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "On Fri, Apr 8, 2022 at 7:56 PM Dave Page <dpage@pgadmin.org> wrote:\n> Windows 8 was a pretty unpopular release, so I would expect shifting to 10/2016+ for PG 16 would be unlikely to be a major problem.\n\nThanks to Michael for making that happen. That removes the main thing\nI didn't know how to deal with in this patch. Here's a rebase with\nsome cleanup.\n\nWith my garbage collector hat on, I see that all systems we target\nhave fdatasync(), except:\n\n1. Windows, but this patch supplies src/port/fdatasync.c.\n2. DragonflyBSD before 6.1. We have 6.0 in the build farm.\n3. Ancient macOS. Current releases have it, though we have to cope\nwith a missing declaration.\n\n From a standards point of view, fdatasync() is issue 5 POSIX like\nfsync(). Both are optional, but, being a database, we require\nfsync(), and they're both covered by the same POSIX option\n\"Synchronized Input and Output\".\n\nMy plan now is to commit this patch so that problem #1 is solved, prod\nconchuela's owner to upgrade to solve #2, and wait until Tom shuts\ndown prairiedog to solve #3. Then we could consider removing the\nHAVE_FDATASYNC probe and associated #ifdefs when convenient. For that\nreason, I'm not too bothered about the slight weirdness of defining\nHAVE_FDATASYNC on Windows even though that doesn't come from\nconfigure; it'd hopefully be short-lived. Better ideas welcome,\nthough.
Does that make sense?", "msg_date": "Mon, 18 Jul 2022 15:26:36 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> With my garbage collector hat on, I see that all systems we target\n> have fdatasync(), except:\n\n> 1. Windows, but this patch supplies src/port/fdatasync.c.\n> 2. DragonflyBSD before 6.1. We have 6.0 in the build farm.\n> 3. Ancient macOS. Current releases have it, though we have to cope\n> with a missing declaration.\n\nHmmm ... according to [1], while current macOS has an undocumented\nfdatasync function, it doesn't seem to do anything as useful as,\nsay, sync data to disk. I'm not sure what direction you're headed\nin here, but it probably shouldn't include assuming that fdatasync\nis actually useful on macOS. But maybe that's not your point?\n\n> My plan now is to commit this patch so that problem #1 is solved, prod\n> conchuela's owner to upgrade to solve #2, and wait until Tom shuts\n> down prairiedog to solve #3.\n\nYou could force my hand by pushing something that requires this ;-).\nI'm not feeling particularly wedded to prairiedog per se. As with\nmy once-and-future HPPA machine, I'd prefer to wait until NetBSD 10\nis a thing before spinning up an official buildfarm animal, but\nI suppose that that's not far away.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/1673109.1610733352%40sss.pgh.pa.us\n\n\n", "msg_date": "Sun, 17 Jul 2022 23:43:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "On Mon, Jul 18, 2022 at 3:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > With my garbage collector hat on, I see that all systems we target\n> > have fdatasync(), except:\n>\n> > 1. 
Windows, but this patch supplies src/port/fdatasync.c.\n> > 2. DragonflyBSD before 6.1. We have 6.0 in the build farm.\n> > 3. Ancient macOS. Current releases have it, though we have to cope\n> > with a missing declaration.\n>\n> Hmmm ... according to [1], while current macOS has an undocumented\n> fdatasync function, it doesn't seem to do anything as useful as,\n> say, sync data to disk. I'm not sure what direction you're headed\n> in here, but it probably shouldn't include assuming that fdatasync\n> is actually useful on macOS. But maybe that's not your point?\n\nOh, I'm not planning to change the default choice on macOS (or\nWindows). I *am* assuming we're not going to take it away as an\noption, that cat being out of the bag ever since configure found\nApple's secret fdatasync (note that O_DSYNC, our default, is also\nundocumented and also known not to flush caches, but at least it's\npresent in an Apple header!). I was just noting an upcoming\nopportunity to remove the configure/meson probes for fdatasync, which\nmade me feel better about the slightly kludgy way this patch is\ndefining HAVE_FDATASYNC explicitly on Windows.\n\n> > My plan now is to commit this patch so that problem #1 is solved, prod\n> > conchuela's owner to upgrade to solve #2, and wait until Tom shuts\n> > down prairiedog to solve #3.\n>\n> You could force my hand by pushing something that requires this ;-).\n\nHeh. Let me ask about the DragonFlyBSD thing first.\n\n\n", "msg_date": "Mon, 18 Jul 2022 16:20:23 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> ... I was just noting an upcoming\n> opportunity to remove the configure/meson probes for fdatasync, which\n> made me feel better about the slightly kludgy way this patch is\n> defining HAVE_FDATASYNC explicitly on Windows.\n\nHm. 
There is certainly not any harm in the meson infrastructure\nskipping that test, because prairiedog is not able to run meson\nanyway. Can we do that and still leave it in place on the autoconf\nside? Maybe not, because I suppose you want to remove #ifdefs in\nthe code itself.\n\nI see that fdatasync goes back as far as SUS v2, which we've long\ntaken as our minimum POSIX infrastructure. So there's not a lot\nof room to insist that we should support allegedly-Unix platforms\nwithout it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 Jul 2022 00:33:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "On Mon, Jul 18, 2022 at 03:26:36PM +1200, Thomas Munro wrote:\n> My plan now is to commit this patch so that problem #1 is solved, prod\n> conchuela's owner to upgrade to solve #2, and wait until Tom shuts\n> down prairiedog to solve #3. Then we could consider removing the\n> HAVE_FDATASYNC probe and associated #ifdefs when convenient. For that\n> reason, I'm not too bothered about the slight weirdness of defining\n> HAVE_FDATASYNC on Windows even though that doesn't come from\n> configure; it'd hopefully be short-lived. Better ideas welcome,\n> though. Does that make sense?\n\nDo you still need HAVE_DECL_FDATASYNC? \n--\nMichael", "msg_date": "Tue, 19 Jul 2022 13:54:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "On Tue, Jul 19, 2022 at 4:54 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Do you still need HAVE_DECL_FDATASYNC?\n\nI guess so, because that is currently used for macOS, and with this\npatch would also be used to control the declaration for Windows. 
The\nalternative would be to explicitly test for WIN32 or __darwin__.\n\nThe reason we need it for macOS is that they have had fdatasync\nfunction for many years now, and configure detects it, but they\nhaven't ever declared it in a header, so we (accidentally?) do it in\nc.h. We didn't set that up for Apple! The commit that added it was\n33cc5d8a, which was about a month before Apple shipped the first\nversion of OS X (and long before they defined the function). So there\nmust have been another Unix with that problem, lost in the mists of\ntime.\n\n\n", "msg_date": "Tue, 19 Jul 2022 17:45:15 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> The reason we need it for macOS is that they have had fdatasync\n> function for many years now, and configure detects it, but they\n> haven't ever declared it in a header, so we (accidentally?) do it in\n> c.h. We didn't set that up for Apple! The commit that added it was\n> 33cc5d8a, which was about a month before Apple shipped the first\n> version of OS X (and long before they defined the function). So there\n> must have been another Unix with that problem, lost in the mists of\n> time.\n\nIt might have just been paranoia, but I doubt it. Back then we\nwere still dealing with lots of systems that didn't have every\nfunction described in SUS v2.\n\nIf you poked around in the mail archives you could likely find some\nassociated discussion, but I'm too lazy for that ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Jul 2022 09:36:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "Ok, I've pushed the Windows patch. 
I'll watch the build farm to see\nif I've broken any of the frankentoolchain Windows animals.\n\nMikael kindly upgraded conchuela, so that leaves just prairiedog\nwithout fdatasync. I've attached a patch to drop the configure probe\nfor that once prairiedog's host is reassigned to new duties, if we're\nagreed on that.\n\nWhile in this part of the code I noticed another anachronism that\ncould be cleaned up: our handling of the old pre-standard BSD O_FSYNC\nflag. Pulling on that I noticed I could remove a bunch of associated\nmacrology.", "msg_date": "Wed, 20 Jul 2022 15:22:07 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "On Wed, 20 Jul 2022 at 15:22, Thomas Munro <thomas.munro@gmail.com> wrote:\n> Ok, I've pushed the Windows patch. I'll watch the build farm to see\n> if I've broken any of the frankentoolchain Windows animals.\n\nJust to get in there before the farm does... I just got a boatload of\nredefinition of HAVE_FDATASYNC warnings. I see it already gets\ndefined in pg_config.h\n\nAll compiles cleanly with the attached.\n\nDavid", "msg_date": "Wed, 20 Jul 2022 16:08:05 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "On Wed, Jul 20, 2022 at 4:08 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Wed, 20 Jul 2022 at 15:22, Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Ok, I've pushed the Windows patch. I'll watch the build farm to see\n> > if I've broken any of the frankentoolchain Windows animals.\n>\n> Just to get in there before the farm does... I just got a boatload of\n> redefinition of HAVE_FDATASYNC warnings. I see it already gets\n> defined in pg_config.h\n>\n> All compiles cleanly with the attached.\n\nOops. 
Thanks, pushed.\n\n\n", "msg_date": "Wed, 20 Jul 2022 16:14:39 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "On Wed, Jul 20, 2022 at 4:14 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Jul 20, 2022 at 4:08 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > On Wed, 20 Jul 2022 at 15:22, Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > Ok, I've pushed the Windows patch. I'll watch the build farm to see\n> > > if I've broken any of the frankentoolchain Windows animals.\n> >\n> > Just to get in there before the farm does... I just got a boatload of\n> > redefinition of HAVE_FDATASYNC warnings. I see it already gets\n> > defined in pg_config.h\n> >\n> > All compiles cleanly with the attached.\n>\n> Oops. Thanks, pushed.\n\n... and here's a rebase of the patch that removes that macro stuff,\nsince cfbot is watching this thread.", "msg_date": "Wed, 20 Jul 2022 16:48:58 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "Hearing no objection, I committed the patch to remove O_FSYNC. The\nnext cleanup one I'll just leave here for now.", "msg_date": "Fri, 22 Jul 2022 13:37:37 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows now has fdatasync()" }, { "msg_contents": "David kindly ran some tests of this thing on real hardware. The\nresults were mostly in line with expectations, but we learned some new\nthings.\n\nTL;DR We probably should consider this as a safer default, but it'd\nbe good for someone more hands-on with this OS and knowledgeable about\nstorage to investigate and propose that. My original goal here was\nprimarily Unix/Windows harmonisation and cleanup since I'm doing a\nbunch of hacking on I/O, but I can't unsee an\nunsafe-at-least-on-consumer-gear default now that I've seen it. 
The\nmain thing I'm aware of that we don't know yet is what happens if you\ntry it on a non-NTFS file system (ReFS? SMB?) -- hopefully it falls\nback to fsync behaviour.\n\nObservations from an old Windows 8.1 system with a SATA drive:\n\n1. So far you can apparently still actually compile and run on 8.1,\ndespite recent commits to de-support it.\n2. You can use the new wal_sync_method=fdatasync, without error, and\ntimings are consistent with falling back to full fsync behaviour.\nThat makes sense, I guess, because the function existed. It's just a\nnew flag bit, and the default behaviour for flags == 0 was already\ntheir fsync. That seems like a good outcome even though 8.1 isn't a\ntarget anymore.\n\nObservations from a current Windows 11 system with an NVMe drive:\n\n1. fdatasync is faster than fsync, as expected. Twice as fast with\nwrite cache disabled, a bit faster with write cache enabled.\n2. Timings seem to suggest that open_datasync (the current default)\nis not really writing through the drive cache. I'd previously thought\nthat was a SATA-only problem based on [1], which said that EIDE/SATA\ndrivers did not pass through the FUA flag that NTFS sends for\nFILE_FLAG_WRITE_THROUGH (= O_DSYNC) on the basis that many drives\nignored it anyway, but these numbers seem to suggest that David's\nrecent-ish NVMe system has the same problem as the old SATA system.\n\nGenerally, Windows' approach seems to be that NTFS\nFILE_FLAG_WRITE_THROUGH fires an FUA flag into the storage stack, and\neither the driver or the drive is free to fling it out the window, and\nit's the user's problem to worry about that, whereas Linux at least\nasks nicely if the drive understands FUA and falls back to flushing\nthe whole cache if not[2]. 
I also know that Linux has been flaky\naround this in the past too, especially on consumer storage, and macOS\nand at least some of the older BSD/UFS systems just don't do this\nstuff at all for user data (yet) so it's not like there is anything\nuniversal about this topic. Note that drive caches are enabled by\ndefault in Windows, and our manual does already tell you about this\nproblem[3].\n\nOne thing to note about the numbers below: pg_test_fsync.c's\nopen_datasync test is also using FILE_FLAG_NO_BUFFERING (= O_DIRECT),\nunlike PostgreSQL, which muddies the waters slightly. (There was a\npatch upthread to fix that and report both numbers, I may come back to\nthat.)\n\nWindows 11, NVMe, write cache enabled:\n\n open_datasync 27306.286 ops/sec 37 usecs/op\n fdatasync 3065.428 ops/sec 326 usecs/op\n fsync 2577.498 ops/sec 388 usecs/op\n\nWindows 11, NVMe, write cache disabled:\n\n open_datasync 3477.258 ops/sec 288 usecs/op\n fdatasync 3263.418 ops/sec 306 usecs/op\n fsync 1641.502 ops/sec 609 usecs/op\n\nWindows 8.1, SATA:\n\n open_datasync 19934.532 ops/sec 50 usecs/op\n fdatasync 231.429 ops/sec 4321 usecs/op\n fsync 240.050 ops/sec 4166 usecs/op\n\n(We couldn't figure out how to disable the write cache on the 8.1\nmachine -- the usual checkbox had no effect -- but we didn't waste\ntime investigating that old system beyond the curiosity of checking if\nit'd work at all.)\n\n[1] https://devblogs.microsoft.com/oldnewthing/20170510-00/?p=95505\n[2] https://techcommunity.microsoft.com/t5/sql-server-blog/sql-server-on-linux-forced-unit-access-fua-internals/ba-p/3199102\n[3] https://www.postgresql.org/docs/devel/wal-reliability.html\n\n\n", "msg_date": "Wed, 10 Aug 2022 13:37:16 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows now has fdatasync()" } ]
[ { "msg_contents": "I have seen this numerous times but had not dug into it, until now.\n\nIf pg_upgrade fails and is re-run, it appends to its logfiles, which is\nconfusing since, if it fails again, it then looks like the original error\nrecurred and wasn't fixed. The \"append\" behavior dates back to 717f6d608.\n\nI think it should either truncate the logfiles, or error early if any of the\nfiles exist. Or it could put all its output files into a newly-created\nsubdirectory. Or this message could be output to the per-db logfiles, and not\njust the static ones:\n| \"pg_upgrade run on %s\".\n\nFor the per-db logfiels with OIDs in their name, changing open() from \"append\"\nmode to truncate mode doesn't work, since they're written to in parallel.\nThey have to be removed/truncated in advance.\n\nThis is one possible fix. You can test its effect by deliberately breaking one\nof the calls to exec_progs(), like this.\n\n- \"\\\"%s/pg_restore\\\" %s %s --exit-on-error --verbose \"\n+ \"\\\"%s/pg_restore\\\" %s %s --exit-on-error --verboose \"", "msg_date": "Sat, 11 Dec 2021 20:50:17 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Sat, Dec 11, 2021 at 08:50:17PM -0600, Justin Pryzby wrote:\n> I have seen this numerous times but had not dug into it, until now.\n> \n> If pg_upgrade fails and is re-run, it appends to its logfiles, which is\n> confusing since, if it fails again, it then looks like the original error\n> recurred and wasn't fixed. The \"append\" behavior dates back to 717f6d608.\n> \n> I think it should either truncate the logfiles, or error early if any of the\n> files exist. Or it could put all its output files into a newly-created\n> subdirectory. 
Or this message could be output to the per-db logfiles, and not\n> just the static ones:\n> | \"pg_upgrade run on %s\".\n> \n> For the per-db logfiels with OIDs in their name, changing open() from \"append\"\n> mode to truncate mode doesn't work, since they're written to in parallel.\n> They have to be removed/truncated in advance.\n> \n> This is one possible fix. You can test its effect by deliberately breaking one\n> of the calls to exec_progs(), like this.\n> \n> - \"\\\"%s/pg_restore\\\" %s %s --exit-on-error --verbose \"\n> + \"\\\"%s/pg_restore\\\" %s %s --exit-on-error --verboose \"\n\nUh, the database server doesn't erase its logs on crash/failure, so why\nshould pg_upgrade do that?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 15 Dec 2021 16:09:16 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Wed, Dec 15, 2021 at 04:09:16PM -0500, Bruce Momjian wrote:\n> On Sat, Dec 11, 2021 at 08:50:17PM -0600, Justin Pryzby wrote:\n> > I have seen this numerous times but had not dug into it, until now.\n> > \n> > If pg_upgrade fails and is re-run, it appends to its logfiles, which is\n> > confusing since, if it fails again, it then looks like the original error\n> > recurred and wasn't fixed. The \"append\" behavior dates back to 717f6d608.\n> > \n> > I think it should either truncate the logfiles, or error early if any of the\n> > files exist. Or it could put all its output files into a newly-created\n> > subdirectory. 
Or this message could be output to the per-db logfiles, and not\n> > just the static ones:\n> > | \"pg_upgrade run on %s\".\n> > \n> > For the per-db logfiels with OIDs in their name, changing open() from \"append\"\n> > mode to truncate mode doesn't work, since they're written to in parallel.\n> > They have to be removed/truncated in advance.\n> > \n> > This is one possible fix. You can test its effect by deliberately breaking one\n> > of the calls to exec_progs(), like this.\n> > \n> > - \"\\\"%s/pg_restore\\\" %s %s --exit-on-error --verbose \"\n> > + \"\\\"%s/pg_restore\\\" %s %s --exit-on-error --verboose \"\n> \n> Uh, the database server doesn't erase its logs on crash/failure, so why\n> should pg_upgrade do that?\n\nTo avoid the presence of irrelevant errors from the previous invocation of\npg_upgrade.\n\nMaybe you would prefer one of my other ideas , like \"put all its output files\ninto a newly-created subdirectory\" ?\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 15 Dec 2021 15:12:12 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Sat, Dec 11, 2021 at 08:50:17PM -0600, Justin Pryzby wrote:\n>> If pg_upgrade fails and is re-run, it appends to its logfiles, which is\n>> confusing since, if it fails again, it then looks like the original error\n>> recurred and wasn't fixed. The \"append\" behavior dates back to 717f6d608.\n\n> Uh, the database server doesn't erase its logs on crash/failure, so why\n> should pg_upgrade do that?\n\nThe server emits enough information so that it's not confusing:\nthere are timestamps, and there's an identifiable startup line.\npg_upgrade does neither. 
If you don't want to truncate as\nJustin suggests, you should do that instead.\n\nPersonally I like the idea of making a timestamped subdirectory\nand dropping all the files in that, because the thing that most\nannoys *me* about pg_upgrade is the litter it leaves behind in\n$CWD. A subdirectory would make it far easier to mop up the mess.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 Dec 2021 16:17:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Wed, Dec 15, 2021 at 04:17:23PM -0500, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Sat, Dec 11, 2021 at 08:50:17PM -0600, Justin Pryzby wrote:\n> >> If pg_upgrade fails and is re-run, it appends to its logfiles, which is\n> >> confusing since, if it fails again, it then looks like the original error\n> >> recurred and wasn't fixed. The \"append\" behavior dates back to 717f6d608.\n> \n> > Uh, the database server doesn't erase its logs on crash/failure, so why\n> > should pg_upgrade do that?\n> \n> The server emits enough information so that it's not confusing:\n> there are timestamps, and there's an identifiable startup line.\n> pg_upgrade does neither. If you don't want to truncate as\n> Justin suggests, you should do that instead.\n> \n> Personally I like the idea of making a timestamped subdirectory\n> and dropping all the files in that, because the thing that most\n> annoys *me* about pg_upgrade is the litter it leaves behind in\n> $CWD. A subdirectory would make it far easier to mop up the mess.\n\nYes, lot of litter. 
Putting it in a subdirectory makes a lot of sense.\nJustin, do you want to work on that patch, since you had an earlier\nversion to fix this?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 15 Dec 2021 16:23:43 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "\nOn 12/15/21 16:23, Bruce Momjian wrote:\n> On Wed, Dec 15, 2021 at 04:17:23PM -0500, Tom Lane wrote:\n>> Bruce Momjian <bruce@momjian.us> writes:\n>>> On Sat, Dec 11, 2021 at 08:50:17PM -0600, Justin Pryzby wrote:\n>>>> If pg_upgrade fails and is re-run, it appends to its logfiles, which is\n>>>> confusing since, if it fails again, it then looks like the original error\n>>>> recurred and wasn't fixed. The \"append\" behavior dates back to 717f6d608.\n>>> Uh, the database server doesn't erase its logs on crash/failure, so why\n>>> should pg_upgrade do that?\n>> The server emits enough information so that it's not confusing:\n>> there are timestamps, and there's an identifiable startup line.\n>> pg_upgrade does neither. If you don't want to truncate as\n>> Justin suggests, you should do that instead.\n>>\n>> Personally I like the idea of making a timestamped subdirectory\n>> and dropping all the files in that, because the thing that most\n>> annoys *me* about pg_upgrade is the litter it leaves behind in\n>> $CWD. A subdirectory would make it far easier to mop up the mess.\n> Yes, lot of litter. Putting it in a subdirectory makes a lot of sense.\n> Justin, do you want to work on that patch, since you had an earlier\n> version to fix this?\n>\n\n\n\nThe directory name needs to be predictable somehow, or maybe optionally\nset as a parameter. Having just a timestamped directory name would make\nlife annoying for a poor buildfarm maintainer. 
Also, please don't change\nanything before I have a chance to adjust the buildfarm code to what is\ngoing to be done.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 15 Dec 2021 17:04:54 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Wed, Dec 15, 2021 at 05:04:54PM -0500, Andrew Dunstan wrote:\n> On 12/15/21 16:23, Bruce Momjian wrote:\n> > On Wed, Dec 15, 2021 at 04:17:23PM -0500, Tom Lane wrote:\n> >> Bruce Momjian <bruce@momjian.us> writes:\n> >>> On Sat, Dec 11, 2021 at 08:50:17PM -0600, Justin Pryzby wrote:\n> >>>> If pg_upgrade fails and is re-run, it appends to its logfiles, which is\n> >>>> confusing since, if it fails again, it then looks like the original error\n> >>>> recurred and wasn't fixed. The \"append\" behavior dates back to 717f6d608.\n> >>> Uh, the database server doesn't erase its logs on crash/failure, so why\n> >>> should pg_upgrade do that?\n> >> The server emits enough information so that it's not confusing:\n> >> there are timestamps, and there's an identifiable startup line.\n> >> pg_upgrade does neither. If you don't want to truncate as\n> >> Justin suggests, you should do that instead.\n> >>\n> >> Personally I like the idea of making a timestamped subdirectory\n> >> and dropping all the files in that, because the thing that most\n> >> annoys *me* about pg_upgrade is the litter it leaves behind in\n> >> $CWD. A subdirectory would make it far easier to mop up the mess.\n> > Yes, lot of litter. Putting it in a subdirectory makes a lot of sense.\n> > Justin, do you want to work on that patch, since you had an earlier\n> > version to fix this?\n> \n> The directory name needs to be predictable somehow, or maybe optionally\n> set as a parameter. Having just a timestamped directory name would make\n> life annoying for a poor buildfarm maintainer. 
Also, please don't change\n> anything before I have a chance to adjust the buildfarm code to what is\n> going to be done.\n\nFeel free to suggest the desirable behavior.\nIt could write to pg_upgrade.log/* and refuse to run if the dir already exists.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 15 Dec 2021 16:13:10 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Wed, Dec 15, 2021 at 04:13:10PM -0600, Justin Pryzby wrote:\n> On Wed, Dec 15, 2021 at 05:04:54PM -0500, Andrew Dunstan wrote:\n>> The directory name needs to be predictable somehow, or maybe optionally\n>> set as a parameter. Having just a timestamped directory name would make\n>> life annoying for a poor buildfarm maintainer. Also, please don't change\n>> anything before I have a chance to adjust the buildfarm code to what is\n>> going to be done.\n> \n> Feel free to suggest the desirable behavior.\n> It could write to pg_upgrade.log/* and refuse to run if the dir already exists.\n\nAndrew's point looks rather sensible to me. So, this stuff should\nhave a predictable name (pg_upgrade.log, pg_upgrade_log or upgrade_log\nwould be fine). But I would also add an option to be able to define a\ncustom log path. The latter would be useful for the regression tests\nso as everything gets could get redirected to a path already filtered\nout.\n--\nMichael", "msg_date": "Thu, 16 Dec 2021 10:39:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On 16.12.21 02:39, Michael Paquier wrote:\n> On Wed, Dec 15, 2021 at 04:13:10PM -0600, Justin Pryzby wrote:\n>> On Wed, Dec 15, 2021 at 05:04:54PM -0500, Andrew Dunstan wrote:\n>>> The directory name needs to be predictable somehow, or maybe optionally\n>>> set as a parameter. 
Having just a timestamped directory name would make\n>>> life annoying for a poor buildfarm maintainer. Also, please don't change\n>>> anything before I have a chance to adjust the buildfarm code to what is\n>>> going to be done.\n>>\n>> Feel free to suggest the desirable behavior.\n>> It could write to pg_upgrade.log/* and refuse to run if the dir already exists.\n> \n> Andrew's point looks rather sensible to me. So, this stuff should\n> have a predictable name (pg_upgrade.log, pg_upgrade_log or upgrade_log\n> would be fine). But I would also add an option to be able to define a\n> custom log path. The latter would be useful for the regression tests\n> so as everything gets could get redirected to a path already filtered\n> out.\n\nCould we make it write just one log file? Is having multiple log files \nbetter?\n\n\n", "msg_date": "Thu, 16 Dec 2021 12:11:25 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "> On 16 Dec 2021, at 12:11, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> Could we make it write just one log file? Is having multiple log files better?\n\nHaving individual <checkname>.txt files from checks with additional information\non how to handle the error are quite convenient when writing wrappers around\npg_upgrade (speaking from experience of having written multiple pg_upgraade\nfrontends). 
Parsing a single logfile is more work, and will break existing\nscripts.\n\nI'm in favor of a predictable by default logpath, with a parameter to override,\nas mentioned upthread.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 16 Dec 2021 12:23:08 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Thu, Dec 16, 2021 at 12:23:08PM +0100, Daniel Gustafsson wrote:\n> > On 16 Dec 2021, at 12:11, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> > Could we make it write just one log file? Is having multiple log files better?\n> \n> Having individual <checkname>.txt files from checks with additional information\n> on how to handle the error are quite convenient when writing wrappers around\n> pg_upgrade (speaking from experience of having written multiple pg_upgraade\n> frontends). Parsing a single logfile is more work, and will break existing\n> scripts.\n> \n> I'm in favor of a predictable by default logpath, with a parameter to override,\n> as mentioned upthread.\n\nI put this together in the simplest way, prefixing all the filenames with the\nconfigured path..\n\nAnother options is to chdir() into the given path. But, pg_upgrade takes (and\nrequires) a bunch of other paths, like -d -D -b -B, and those are traditionally\ninterpretted relative to CWD. I could getcwd() and prefix all the -[dDbB] with\nthat, but prefixing a handful of binary/data paths is hardly better than\nprefixing a handful of dump/logfile paths. I suppose that openat() isn't\nportable. 
I don't think it's worth prohibiting relative paths, so I can't\nthink of any less-naive way to do this.\n\nI didn't move the delete-old-cluster.sh, since that's intended to stay around\neven after a successful upgrade, as opposed to the other logs, which are\ntypically removed at that point.\n\n-- \nJustin", "msg_date": "Fri, 17 Dec 2021 11:21:13 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Fri, Dec 17, 2021 at 11:21:13AM -0600, Justin Pryzby wrote:\n> I put this together in the simplest way, prefixing all the filenames with the\n> configured path.\n\nWell, why not.\n\n> Another option is to chdir() into the given path. But, pg_upgrade takes (and\n> requires) a bunch of other paths, like -d -D -b -B, and those are traditionally\n> interpreted relative to CWD. I could getcwd() and prefix all the -[dDbB] with\n> that, but prefixing a handful of binary/data paths is hardly better than\n> prefixing a handful of dump/logfile paths. I suppose that openat() isn't\n> portable. I don't think it's worth prohibiting relative paths, so I can't\n> think of any less-naive way to do this.\n\nIf we add a new file, .gitignore would find out about it quickly and\ninform about a not-so-clean tree. I would tend to prefer your\napproach, here. Relative paths can be useful.\n\n> I didn't move the delete-old-cluster.sh, since that's intended to stay around\n> even after a successful upgrade, as opposed to the other logs, which are\n> typically removed at that point.\n\nMakes sense to me.\n\n+ log_opts.basedir = getenv(\"PG_UPGRADE_LOGDIR\");\n+ if (log_opts.basedir != NULL)\n+ log_opts.basedir = strdup(log_opts.basedir);\n+ else\n+ log_opts.basedir = \"pg_upgrade_log.d\";\nWhy is this controlled with an environment variable? It seems to me\nthat an option switch would be much better, no? 
While tuning things,\nwe could choose something simpler for the default, like\n\"pg_upgrade_log\". I don't have a good history in naming new things,\nthough :)\n\n.gitignore should be updated, I guess? Besides, this patch has no\ndocumentation.\n--\nMichael", "msg_date": "Mon, 20 Dec 2021 20:21:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Mon, Dec 20, 2021 at 08:21:51PM +0900, Michael Paquier wrote:\n> On Fri, Dec 17, 2021 at 11:21:13AM -0600, Justin Pryzby wrote:\n\n> + log_opts.basedir = \"pg_upgrade_log.d\";\n\n> we could choose something simpler for the default, like\n> \"pg_upgrade_log\". I don't have a good history in naming new things,\n> though :)\n\nI specifically called it .d to made it obvious that it's a dir - nearly\neverything that ends in \"log\" is a file, so people are likely to run \"rm\" and\n\"less\" on it - including myself.\n\n> .gitignore should be updated, I guess?\n\nAre you suggesting to remove these ?\n-/pg_upgrade_internal.log\n-/reindex_hash.sql\n-/loadable_libraries.txt\n\n> Besides, this patch has no documentation.\n\nTBH I'm not even sure if the dir needs to be configurable ?", "msg_date": "Mon, 20 Dec 2021 21:39:26 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Mon, Dec 20, 2021 at 09:39:26PM -0600, Justin Pryzby wrote:\n> On Mon, Dec 20, 2021 at 08:21:51PM +0900, Michael Paquier wrote:\n>> we could choose something simpler for the default, like\n>> \"pg_upgrade_log\". 
I don't have a good history in naming new things,\n>> though :)\n> \n> I specifically called it .d to make it obvious that it's a dir - nearly\n> everything that ends in \"log\" is a file, so people are likely to run \"rm\" and\n> \"less\" on it - including myself.\n\nOkay.\n\n>> .gitignore should be updated, I guess?\n> \n> Are you suggesting to remove these ?\n> -/pg_upgrade_internal.log\n> -/loadable_libraries.txt\n\nYep, it looks so as these are part of the logs, the second one being a\nfailure state.\n\n> -/reindex_hash.sql\n\nBut this one is not, no?\n\n>> Besides, this patch has no documentation.\n> \n> TBH I'm not even sure if the dir needs to be configurable ?\n\nI'd think it is better to have some control on that. Not sure what\nthe opinion of others is on this specific point, though.\n--\nMichael", "msg_date": "Wed, 22 Dec 2021 16:47:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Dec 20, 2021 at 09:39:26PM -0600, Justin Pryzby wrote:\n>> Are you suggesting to remove these ?\n>> -/pg_upgrade_internal.log\n>> -/loadable_libraries.txt\n\n> Yep, it looks so as these are part of the logs, the second one being a\n> failure state.\n\n>> -/reindex_hash.sql\n\n> But this one is not, no?\n\nI'd like to get to a state where there's just one thing to \"rm -rf\"\nto clean up after any pg_upgrade run. 
If we continue to leave the\nwe-suggest-you-run-these scripts loose in $CWD then we've not really\nimproved things much.\n\nPerhaps there'd be merit in putting log files into an additional\nsubdirectory of that output directory, like\npg_upgrade_output.d/logs/foo.log, so that the more-ignorable\noutput files would be separated from the less-ignorable ones.\nOr perhaps that's just gilding the lily.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Dec 2021 09:52:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Wed, Dec 22, 2021 at 09:52:26AM -0500, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Mon, Dec 20, 2021 at 09:39:26PM -0600, Justin Pryzby wrote:\n> >> Are you suggesting to remove these ?\n> >> -/pg_upgrade_internal.log\n> >> -/loadable_libraries.txt\n> \n> > Yep, it looks so as these are part of the logs, the second one being a\n> > failure state.\n> \n> >> -/reindex_hash.sql\n> \n> > But this one is not, no?\n> \n> I'd like to get to a state where there's just one thing to \"rm -rf\"\n> to clean up after any pg_upgrade run. If we continue to leave the\n> we-suggest-you-run-these scripts loose in $CWD then we've not really\n> improved things much.\n\nMy patch moves reindex_hash.sql, and I'm having trouble seeing why it shouldn't\nbe handled in .gitignore the same way as other stuff that's moved.\n\nBut delete-old-cluster.sh is not moved, and I'm not sure how to improve on\nthat.\n\n> Perhaps there'd be merit in putting log files into an additional\n> subdirectory of that output directory, like\n> pg_upgrade_output.d/logs/foo.log, so that the more-ignorable\n> output files would be separated from the less-ignorable ones.\n> Or perhaps that's just gilding the lily.\n\nIn the case it's successful, everything is removed - except for the delete\nscript. 
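The single "rm -rf"-able tree described above maps naturally onto a recursive delete at cleanup time. As an illustrative sketch only — not pg_upgrade's actual code; PostgreSQL carries its own rmtree() in src/common for this — a depth-first walk with POSIX nftw() removes files before their parent directories:

```c
#define _XOPEN_SOURCE 500
#include <assert.h>
#include <ftw.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

/*
 * Illustrative stand-in for PostgreSQL's rmtree(path, true): remove
 * everything under "path" and then the directory itself.  FTW_DEPTH
 * makes nftw() visit a directory's contents before the directory, so
 * rmdir() always sees an already-empty directory; FTW_PHYS keeps the
 * walk from following symlinks out of the tree being deleted.
 */
static int
remove_entry(const char *path, const struct stat *sb,
			 int typeflag, struct FTW *ftwbuf)
{
	(void) sb;
	(void) ftwbuf;
	if (typeflag == FTW_DP)		/* directory, reported post-order */
		return rmdir(path);
	return unlink(path);		/* regular file or symlink */
}

static int
rmtree_sketch(const char *path)
{
	/* 8 = number of file descriptors nftw() may keep open */
	return nftw(path, remove_entry, 8, FTW_DEPTH | FTW_PHYS);
}
```

The in-tree rmtree() exposes the same shape of interface — a path plus a flag saying whether to remove the top directory itself — which is what the later patch versions call from cleanup().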
I can see the case for separating the dumps (which are essentially\ninternal and of which there may be many) and the logs (same), from\nthe .txt error files like loadable_libraries.txt (which are user-facing).\n\nIt could also be divided with each DB having its own subdir, with a dumpfile\nand a logfile.\n\nShould the unix socket be created underneath the \"output dir\" ?\n\nShould it be possible to set the output dir to \".\" ? That would give the\npre-existing behavior, but only if we don't use subdirs for log/ and dump/.", "msg_date": "Wed, 22 Dec 2021 10:36:12 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "The cfbot was failing under windows:\n| [22:07:02.159] could not create directory \"pg_upgrade_output.d\": File exists\n\nIt's because parseCommandLine() was called before get_restricted_token(), which\nre-executes the process, and runs parseCommandLine again.\n\nparseCommandLine already does stuff like opening logfiles, so that's where my\nmkdir() is. It fails when re-run, since the re-exec doesn't call the cleanup()\npath.\n\nI fixed it by calling get_restricted_token() before parseCommandLine().\nThere's precedent for that in pg_regress (but the 3 other callers do it\ndifferently).\n\nIt seems more ideal to always call get_restricted_token sooner than later, but\nfor now I only changed pg_upgrade. It's probably also better if\nparseCommandLine() only parses the commandline, but for now I added on to the\nlogfile stuff that's already there.\n\nBTW the CI integration is pretty swell. I added a few lines of debugging code\nto figure out what was happening here. check world on 4 OSes is faster than\ncheck world run locally. 
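The Windows failure above boils down to mkdir() not being idempotent: creating the output directory doubles as the "must not already exist" check, so it can only happen once per process lifetime — after any re-exec such as the restricted-token dance. A minimal sketch of that behavior, with hypothetical names rather than pg_upgrade's actual functions:

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/*
 * Hypothetical sketch (not pg_upgrade's actual code): create the
 * output directory exactly once.  Treating an existing directory as
 * an error doubles as the "must not already exist" safety check, but
 * it also means the call may only run after any re-exec of the
 * process -- a second invocation fails with EEXIST, which is the
 * "File exists" failure quoted above.
 */
static int
make_outputdir(const char *path)
{
	if (mkdir(path, 0700) != 0)
	{
		fprintf(stderr, "could not create directory \"%s\": %s\n",
				path, strerror(errno));
		return -1;
	}
	return 0;
}
```

Running it twice in the same process reproduces the double-mkdir() symptom without any of the surrounding option-parsing code.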
I rearranged cirrus.yaml to make windows run its\nupgrade check first to save a few minutes.\n\nMaybe the commandline argument should be called something other than \"logdir\"\nsince it also outputs dumps there. But the dumps are more or less not\nuser-facing. But -d and -o are already used. Maybe it shouldn't be\nconfigurable at all?\n\n-- \nJustin", "msg_date": "Sat, 8 Jan 2022 12:48:57 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Sat, Jan 08, 2022 at 12:48:57PM -0600, Justin Pryzby wrote:\n> I fixed it by calling get_restricted_token() before parseCommandLine().\n> There's precedent for that in pg_regress (but the 3 other callers do it\n> differently).\n>\n> It seems more ideal to always call get_restricted_token sooner than later, but\n> for now I only changed pg_upgrade. It's probably also better if\n> parseCommandLine() only parses the commandline, but for now I added on to the\n> logfile stuff that's already there.\n> \nWell, the routine does a bit more than just parsing the options as it\ncreates the directory infrastructure as well. As you say, I think\nthat it would be better to have the option parsing and the\nloading-into-structure portions in one routine, and the creation of\nthe paths in a second one. So, the new contents of the patch could\njust be moved in a new routine, after getting the restricted token.\nMoving get_restricted_token() before or after the option parsing as\nyou do is not a big deal, but your patch is introducing in the\nexisting routine more than what's currently done there as of HEAD.\n\n> Maybe the commandline argument should be called something other than \"logdir\"\n> since it also outputs dumps there. But the dumps are more or less not\n> user-facing. But -d and -o are already used. 
Maybe it shouldn't be\n> configurable at all?\n\nIf the choice of a short option becomes confusing, I'd be fine with\njust a long option, but -l is fine IMO. Including the internal dumps\nin the directory is fine to me, and using a subdir, as you do, makes\nthings more organized.\n\n- \"--binary-upgrade %s -f %s\",\n+ \"--binary-upgrade %s -f %s/dump/%s\",\nSome quotes seem to be missing here.\n\nstatic void\ncleanup(void)\n{\n+ int dbnum;\n+ char **filename;\n+ char filename_path[MAXPGPATH];\n[...] \n+ if (rmdir(filename_path))\n+ pg_log(PG_WARNING, \"failed to rmdir: %s: %m\\n\",\nfilename_path);\n+\n+ if (rmdir(log_opts.basedir))\n+ pg_log(PG_WARNING, \"failed to rmdir: %s: %m\\n\", log_opts.basedir);\n\nIs it intentional to not use rmtree() here? If you put all the data\nin the same directory, cleanup() gets simpler.\n--\nMichael", "msg_date": "Tue, 11 Jan 2022 16:41:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Tue, Jan 11, 2022 at 04:41:58PM +0900, Michael Paquier wrote:\n> On Sat, Jan 08, 2022 at 12:48:57PM -0600, Justin Pryzby wrote:\n> > I fixed it by calling get_restricted_token() before parseCommandLine().\n> > There's precedent for that in pg_regress (but the 3 other callers do it\n> > differently).\n> >\n> > It seems more ideal to always call get_restricted_token sooner than later, but\n> > for now I only changed pg_upgrade. It's probably also better if\n> > parseCommandLine() only parses the commandline, but for now I added on to the\n> > logfile stuff that's already there.\n>\n> Well, the routine does a bit more than just parsing the options as it\n> creates the directory infrastructure as well. As you say, I think\n> that it would be better to have the option parsing and the\n> loading-into-structure portions in one routine, and the creation of\n> the paths in a second one. 
So, the new contents of the patch could\n> just be moved in a new routine, after getting the restricted token.\n> Moving get_restricted_token() before or after the option parsing as\n> you do is not a big deal, but your patch is introducing in the\n> existing routine more than what's currently done there as of HEAD.\n\nI added mkdir() before the other stuff that messes with logfiles, because it\nneeds to happen before that.\n\nAre you suggesting to change the pre-existing behavior of when logfiles are\ncreated, like 0002 ?\n\n> > Maybe the commandline argument should be callled something other than \"logdir\"\n> > since it also outputs dumps there. But the dumps are more or less not\n> > user-facing. But -d and -o are already used. Maybe it shouldn't be\n> > configurable at all?\n> \n> If the choice of a short option becomes confusing, I'd be fine with\n> just a long option, but -l is fine IMO. Including the internal dumps\n> in the directory is fine to me, and using a subdir, as you do, makes\n> things more organized.\n> \n> - \"--binary-upgrade %s -f %s\",\n> + \"--binary-upgrade %s -f %s/dump/%s\",\n> Some quotes seem to be missing here.\n\nYes, good catch\n\n> Is it intentional to not use rmtree() here? If you put all the data\n> in the same directory, cleanup() gets simpler.\n\nThere's no reason not to. We created the dir, and the user didn't specify to\npreserve it. 
It'd be their fault if they put something valuable there after\nstarting pg_upgrade.\n\n-- \nJustin", "msg_date": "Tue, 11 Jan 2022 14:03:07 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Tue, Jan 11, 2022 at 02:03:07PM -0600, Justin Pryzby wrote:\n> I added mkdir() before the other stuff that messes with logfiles, because it\n> needs to happen before that.\n> \n> Are you suggesting to change the pre-existing behavior of when logfiles are\n> created, like 0002 ?\n\nYes, something like that.\n\n> There's no reason not to. We created the dir, and the user didn't specify to\n> preserve it. It'd be their fault if they put something valuable there after\n> starting pg_upgrade.\n\nThis is a path for the data internal to pg_upgrade. My take is that\nthe code simplifications the new option brings are more valuable than\nthis assumption, which I guess would unlikely happen. I may be wrong,\nof course. By the way, while thinking about that, should we worry\nabout --logdir=\".\"?\n--\nMichael", "msg_date": "Wed, 12 Jan 2022 12:59:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Wed, Jan 12, 2022 at 12:59:54PM +0900, Michael Paquier wrote:\n> On Tue, Jan 11, 2022 at 02:03:07PM -0600, Justin Pryzby wrote:\n> > There's no reason not to. We created the dir, and the user didn't specify to\n> > preserve it. It'd be their fault if they put something valuable there after\n> > starting pg_upgrade.\n> \n> This is a path for the data internal to pg_upgrade. My take is that\n> the code simplifications the new option brings are more valuable than\n> this assumption, which I guess would unlikely happen. I may be wrong,\n> of course. 
By the way, while thinking about that, should we worry\n> about --logdir=\".\"?\n\nI asked about that before. Right now, it'll exit(1) when mkdir fails.\n\nI had written a patch to allow \".\" by skipping mkdir (or allowing it to fail if\nerrno == EEXIST), but it seems like an awfully bad idea to try to make that\nwork with rmtree().\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 11 Jan 2022 22:08:13 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Tue, Jan 11, 2022 at 10:08:13PM -0600, Justin Pryzby wrote:\n> I asked about that before. Right now, it'll exit(1) when mkdir fails.\n> \n> I had written a patch to allow \".\" by skipping mkdir (or allowing it to fail if\n> errno == EEXIST), but it seems like an awfully bad idea to try to make that\n> work with rmtree().\n\nSo, I have been poking at this patch, and found myself doing a couple\nof modifications:\n- Renaming of the option from --logdir to --outputdir, as this does\nnot include only logs. That matches also better with default value\nassigned in previous patches, aka pg_upgrade_output.d.\n- Convert the output directory to an absolute path when the various\ndirectories are created, and use that for the whole run. pg_upgrade\nis unlikely going to chdir(), but I don't really see why we should\njust not use an absolute path all the time, set from the start.\n- Add some sanity check about the path used, aka no parent reference\nallowed and the output path should not be a direct parent of the\ncurrent working directory.\n- Rather than assuming that \"log/\" and \"dump/\" are hardcoded in\nvarious places, save more paths into log_opts.\n\nI have noticed a couple of incorrect things in the docs, and some\nother things. 
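The "no parent reference allowed" item in that list can be sketched as a small helper. This is hypothetical illustration code, not the in-tree path_contains_parent_reference(): it rejects any ".." path component, so a user-supplied output path cannot point a later recursive delete above the intended tree:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical helper, not PostgreSQL's path_contains_parent_reference():
 * reject any ".." component in a path, so a user-supplied output
 * directory cannot escape upward.  Components such as "..hidden" or
 * "a..b" are fine; only a component that is exactly ".." is rejected.
 */
static bool
has_parent_reference(const char *path)
{
	const char *p = path;

	while (*p)
	{
		/* a ".." component ends at a path separator or at the end */
		if (p[0] == '.' && p[1] == '.' &&
			(p[2] == '\0' || p[2] == '/'))
			return true;
		/* skip to the start of the next component */
		while (*p && *p != '/')
			p++;
		while (*p == '/')
			p++;
	}
	return false;
}
```

A check like this only guards the relative-path cases; absolute paths pointing back into the working tree need the separate comparison against the current directory mentioned in the list above.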
It is a bit late here, so I may have missed a couple of\nthings but I'll look at this stuff once again in a couple of days.\n\nSo, what do you think?\n--\nMichael", "msg_date": "Wed, 19 Jan 2022 17:13:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Wed, Jan 19, 2022 at 05:13:18PM +0900, Michael Paquier wrote:\n> I have noticed a couple of incorrect things in the docs, and some\n> other things. It is a bit late here, so I may have missed a couple of\n> things but I'll look at this stuff once again in a couple of days.\n\nAnd the docs failed to build..\n--\nMichael", "msg_date": "Wed, 19 Jan 2022 19:39:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Wed, Jan 19, 2022 at 05:13:18PM +0900, Michael Paquier wrote:\n> On Tue, Jan 11, 2022 at 10:08:13PM -0600, Justin Pryzby wrote:\n> > I asked about that before. Right now, it'll exit(1) when mkdir fails.\n> > \n> > I had written a patch to allow \".\" by skipping mkdir (or allowing it to fail if\n> > errno == EEXIST), but it seems like an awfully bad idea to try to make that\n> > work with rmtree().\n\nI still don't know if it even needs to be configurable.\n\n> - Add some sanity check about the path used, aka no parent reference\n> allowed and the output path should not be a direct parent of the\n> current working directory.\n\nI'm not sure these restrictions are needed ?\n\n+ outputpath = make_absolute_path(log_opts.basedir); \n+ if (path_contains_parent_reference(outputpath)) \n+ pg_fatal(\"reference to parent directory not allowed\\n\"); \n\nBesides, you're passing the wrong path here.\n\n> I have noticed a couple of incorrect things in the docs, and some\n> other things. 
It is a bit late here, so I may have missed a couple of\n> things but I'll look at this stuff once again in a couple of days.\n\n> + <command>pg_upgrade</command>, and is be removed after a successful\n\nremove \"be\"\n\n> + if (mkdir(log_opts.basedir, S_IRWXU | S_IRWXG | S_IRWXO))\n\nS_IRWXG | S_IRWXO are useless due to the umask, right ?\nMaybe use PG_DIR_MODE_OWNER ?\n\n> + if (mkdir(log_opts.basedir, S_IRWXU | S_IRWXG | S_IRWXO))\n> + pg_fatal(\"could not create directory \\\"%s\\\": %m\\n\", filename_path);\n> + if (mkdir(log_opts.dumpdir, S_IRWXU | S_IRWXG | S_IRWXO))\n> + pg_fatal(\"could not create directory \\\"%s\\\": %m\\n\", filename_path);\n> + if (mkdir(log_opts.logdir, S_IRWXU | S_IRWXG | S_IRWXO))\n> + pg_fatal(\"could not create directory \\\"%s\\\": %m\\n\", filename_path);\n\nYou're printing the wrong var. filename_path is not initialized.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 19 Jan 2022 18:05:40 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Wed, Jan 19, 2022 at 06:05:40PM -0600, Justin Pryzby wrote:\n> I still don't know if it even needs to be configurable.\n\nI want this to be configurable to ease the switch of the pg_upgrade to\nTAP, moving the logs into a deterministic temporary location proper to\neach run. This makes reporting much easier on failure, with\nrepeatable tests, and that's why I began poking at this patch first.\n\n> I'm not sure these restrictions are needed ?\n\nThis could lead to issues with rmtree() if we are not careful enough,\nno? We'd had our deal of argument injections with pg_upgrade commands\nin the past (fcd15f1).\n\n> + outputpath = make_absolute_path(log_opts.basedir); \n> + if (path_contains_parent_reference(outputpath)) \n> + pg_fatal(\"reference to parent directory not allowed\\n\"); \n> \n> Besides, you're passing the wrong path here.\n\nWhat would you suggest? 
I was just looking at that again this\nmorning, and split the logic into two parts for the absolute and\nrelative path cases, preventing all cases like that, which would be\nweird, anyway:\n../\n../popo\n.././\n././\n/direct/path/to/cwd/\n/direct/path/../path/to/cwd/\n\n>> + <command>pg_upgrade</command>, and is be removed after a successful\n> \n> remove \"be\"\n\nFixed.\n\n>> + if (mkdir(log_opts.basedir, S_IRWXU | S_IRWXG | S_IRWXO))\n> \n> S_IRWXG | S_IRWXO are useless due to the umask, right ?\n> Maybe use PG_DIR_MODE_OWNER ?\n\nHmm. We could just use pg_dir_create_mode, then. See pg_rewind, as\none example. This opens the door for something pluggable to\nSetDataDirectoryCreatePerm(), though the original use is kind of\ndifferent with data folders.\n\n> You're printing the wrong var. filename_path is not initialized.\n\nUgh.\n--\nMichael", "msg_date": "Thu, 20 Jan 2022 12:01:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Thu, Jan 20, 2022 at 12:01:29PM +0900, Michael Paquier wrote:\n> On Wed, Jan 19, 2022 at 06:05:40PM -0600, Justin Pryzby wrote:\n> \n> > I'm not sure these restrictions are needed ?\n> \n> This could lead to issues with rmtree() if we are not careful enough,\n> no? 
We'd had our deal of argument injections with pg_upgrade commands\n> in the past (fcd15f1).\n\nWe require that the dir not exist, by testing if (mkdir()).\nSo it's okay if someone specifies ../whatever or $CWD.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 19 Jan 2022 21:59:14 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Wed, Jan 19, 2022 at 09:59:14PM -0600, Justin Pryzby wrote:\n> We require that the dir not exist, by testing if (mkdir()).\n> So it's okay if someone specifies ../whatever or $CWD.\n\nWhat I am scared of here is the use of rmtree() if we allow something\nlike that. So we should either keep the removal code in its original\nshape and allow such cases, or restrict the output path. At the end,\nsomething has to change. My points are in favor of the latter because\nI don't really see anybody doing the former. You favor the former.\nNow, we are not talking about a lot of code for any of these, anyway.\nPerhaps we'd better wait for more opinions.\n--\nMichael", "msg_date": "Thu, 20 Jan 2022 13:38:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On 19.01.22 09:13, Michael Paquier wrote:\n> - Renaming of the option from --logdir to --outputdir, as this does\n> not include only logs. That matches also better with default value\n> assigned in previous patches, aka pg_upgrade_output.d.\n\nI'm afraid that is too easily confused with the target directory. \nGenerally, a tool processes data from input to output or from source to \ntarget or something like that, whereas a log is more clearly something \nseparate from this main processing stream. 
The desired \"output\" of \npg_upgrade is the upgraded cluster, after all.\n\nA wildcard idea is to put the log output into the target cluster.\n\n\n", "msg_date": "Thu, 20 Jan 2022 10:31:15 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Thu, Jan 20, 2022 at 10:31:15AM +0100, Peter Eisentraut wrote:\n> I'm afraid that is too easily confused with the target directory. Generally,\n> a tool processes data from input to output or from source to target or\n> something like that, whereas a log is more clearly something separate from\n> this main processing stream. The desired \"output\" of pg_upgrade is the\n> upgraded cluster, after all.\n> \n> A wildcard idea is to put the log output into the target cluster.\n\nNeat idea. That would work fine for my case. So I am fine to stick\nwith this suggestion. \n--\nMichael", "msg_date": "Thu, 20 Jan 2022 19:51:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Thu, Jan 20, 2022 at 07:51:37PM +0900, Michael Paquier wrote:\n> Neat idea. That would work fine for my case. So I am fine to stick\n> with this suggestion. \n\nI have been looking at this idea, and the result is quite nice, being\nsimpler than anything that has been proposed on this thread yet. 
We\nget a simpler removal logic, and there is no need to perform any kind\nof sanity checks with the output path provided as long as we generate\nthe paths and the dirs after adjust_data_dir().\n\nThoughts?\n--\nMichael", "msg_date": "Mon, 24 Jan 2022 10:59:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Mon, Jan 24, 2022 at 10:59:40AM +0900, Michael Paquier wrote:\n> On Thu, Jan 20, 2022 at 07:51:37PM +0900, Michael Paquier wrote:\n> > Neat idea. That would work fine for my case. So I am fine to stick\n> > with this suggestion. \n> \n> I have been looking at this idea, and the result is quite nice, being\n> simpler than anything that has been proposed on this thread yet. We\n> get a simpler removal logic, and there is no need to perform any kind\n> of sanity checks with the output path provided as long as we generate\n> the paths and the dirs after adjust_data_dir().\n...\n> \n> <para>\n> <application>pg_upgrade</application> creates various working files, such\n> - as schema dumps, in the current working directory. For security, be sure\n> - that that directory is not readable or writable by any other users.\n> + as schema dumps, stored within <literal>pg_upgrade_output.d</literal> in\n> + the directory of the new cluster.\n> </para>\n\nUh, how are we instructing people to delete that pg_upgrade output\ndirectory? 
If pg_upgrade completes cleanly, would it be removed\nautomatically?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 24 Jan 2022 12:39:30 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Mon, Jan 24, 2022 at 12:39:30PM -0500, Bruce Momjian wrote:\n> On Mon, Jan 24, 2022 at 10:59:40AM +0900, Michael Paquier wrote:\n> > On Thu, Jan 20, 2022 at 07:51:37PM +0900, Michael Paquier wrote:\n> > > Neat idea. That would work fine for my case. So I am fine to stick\n> > > with this suggestion. \n> > \n> > I have been looking at this idea, and the result is quite nice, being\n> > simpler than anything that has been proposed on this thread yet. We\n> > get a simpler removal logic, and there is no need to perform any kind\n> > of sanity checks with the output path provided as long as we generate\n> > the paths and the dirs after adjust_data_dir().\n> ...\n> > \n> > <para>\n> > <application>pg_upgrade</application> creates various working files, such\n> > - as schema dumps, in the current working directory. For security, be sure\n> > - that that directory is not readable or writable by any other users.\n> > + as schema dumps, stored within <literal>pg_upgrade_output.d</literal> in\n> > + the directory of the new cluster.\n> > </para>\n> \n> Uh, how are we instructing people to delete that pg_upgrade output\n> directory? If pg_upgrade completes cleanly, would it be removed\n> automatically?\n\nClearly.\n\n@@ -689,28 +751,5 @@ cleanup(void) \n \n /* Remove dump and log files? 
*/ \n if (!log_opts.retain) \n- { \n- int dbnum; \n- char **filename; \n- \n- for (filename = output_files; *filename != NULL; filename++) \n- unlink(*filename); \n- \n- /* remove dump files */ \n- unlink(GLOBALS_DUMP_FILE); \n- \n- if (old_cluster.dbarr.dbs) \n- for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++) \n- { \n- char sql_file_name[MAXPGPATH], \n- log_file_name[MAXPGPATH]; \n- DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum]; \n- \n- snprintf(sql_file_name, sizeof(sql_file_name), DB_DUMP_FILE_MASK, old_db->db_oid); \n- unlink(sql_file_name); \n- \n- snprintf(log_file_name, sizeof(log_file_name), DB_DUMP_LOG_FILE_MASK, old_db->db_oid); \n- unlink(log_file_name); \n- } \n- } \n+ rmtree(log_opts.basedir, true); \n } \n\n\n", "msg_date": "Mon, 24 Jan 2022 11:41:17 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Mon, Jan 24, 2022 at 11:41:17AM -0600, Justin Pryzby wrote:\n> On Mon, Jan 24, 2022 at 12:39:30PM -0500, Bruce Momjian wrote:\n> > On Mon, Jan 24, 2022 at 10:59:40AM +0900, Michael Paquier wrote:\n> > > On Thu, Jan 20, 2022 at 07:51:37PM +0900, Michael Paquier wrote:\n> > > > Neat idea. That would work fine for my case. So I am fine to stick\n> > > > with this suggestion. \n> > > \n> > > I have been looking at this idea, and the result is quite nice, being\n> > > simpler than anything that has been proposed on this thread yet. We\n> > > get a simpler removal logic, and there is no need to perform any kind\n> > > of sanity checks with the output path provided as long as we generate\n> > > the paths and the dirs after adjust_data_dir().\n> > ...\n> > > \n> > > <para>\n> > > <application>pg_upgrade</application> creates various working files, such\n> > > - as schema dumps, in the current working directory. 
For security, be sure\n> > > - that that directory is not readable or writable by any other users.\n> > > + as schema dumps, stored within <literal>pg_upgrade_output.d</literal> in\n> > > + the directory of the new cluster.\n> > > </para>\n> > \n> > Uh, how are we instructing people to delete that pg_upgrade output\n> > directory? If pg_upgrade completes cleanly, would it be removed\n> > automatically?\n> \n> Clearly.\n\nOK, thanks. There are really two cleanups --- first, the \"log\"\ndirectory, and second deletion of the old cluster by running\ndelete_old_cluster.sh.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 24 Jan 2022 14:44:21 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Mon, Jan 24, 2022 at 02:44:21PM -0500, Bruce Momjian wrote:\n> OK, thanks. There are really two cleanups --- first, the \"log\"\n> directory, and second deletion of the old cluster by running\n> delete_old_cluster.sh.\n\nYes, this is the same thing as what's done on HEAD with a two-step\ncleanup, except that we only need to remove the log directory\nrather than each individual log entry.\n--\nMichael", "msg_date": "Tue, 25 Jan 2022 07:53:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Mon, Jan 24, 2022 at 10:59:40AM +0900, Michael Paquier wrote:\n> On Thu, Jan 20, 2022 at 07:51:37PM +0900, Michael Paquier wrote:\n> > Neat idea. That would work fine for my case. So I am fine to stick\n> > with this suggestion. \n> \n> I have been looking at this idea, and the result is quite nice, being\n> simpler than anything that has been proposed on this thread yet. 
We\n> get a simpler removal logic, and there is no need to perform any kind\n> of sanity checks with the output path provided as long as we generate\n> the paths and the dirs after adjust_data_dir().\n> \n> Thoughts?\n\nAndrew: you wanted to accommodate any change on the build client, right ?\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 25 Jan 2022 10:45:29 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Tue, Jan 25, 2022 at 10:45:29AM -0600, Justin Pryzby wrote:\n> Andrew: you wanted to accommodate any change on the build client, right ?\n\nYes, this is going to need an adjustment of @logfiles in\nTestUpgrade.pm, with the addition of\n\"$tmp_data_dir/pg_update_output.d/log/*.log\" to be consistent with the\ndata fetched for the tests of older branches.\n--\nMichael", "msg_date": "Wed, 26 Jan 2022 09:44:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Wed, Jan 26, 2022 at 09:44:48AM +0900, Michael Paquier wrote:\n> Yes, this is going to need an adjustment of @logfiles in\n> TestUpgrade.pm, with the addition of\n> \"$tmp_data_dir/pg_update_output.d/log/*.log\" to be consistent with the\n> data fetched for the tests of older branches.\n\nBleh. 
This would point to the old data directory, so this needs to be\n\"$self->{pgsql}/src/bin/pg_upgrade/tmp_check/data/pg_upgrade_output.d/log/*.log\"\nto point to the upgraded cluster.\n\nPlease note that I have sent a patch to merge this change in the\nbuildfarm code. Comments are welcome.\n--\nMichael", "msg_date": "Fri, 28 Jan 2022 22:42:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "\nOn 1/28/22 08:42, Michael Paquier wrote:\n> On Wed, Jan 26, 2022 at 11:00:28AM +0900, Michael Paquier wrote:\n>> Bleh. This would point to the old data directory, so this needs to be\n>> \"$self->{pgsql}/src/bin/pg_upgrade/tmp_check/data/pg_upgrade_output.d/log/*.log\"\n>> to point to the upgraded cluster.\n> Please note that I have sent a patch to merge this change in the\n> buildfarm code. Comments are welcome.\n\n\n\nI have committed this. But it will take time to get every buildfarm owner\nto upgrade. I will try to make a new release ASAP.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 28 Jan 2022 18:27:29 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Fri, Jan 28, 2022 at 06:27:29PM -0500, Andrew Dunstan wrote:\n> I have committed this. 
But it will take time to get every buildfarm own\n> to upgrade.\n\nThanks for that.\n\n> I will try to make a new release ASAP.\n\nAnd thanks for that, as well.\n--\nMichael", "msg_date": "Sat, 29 Jan 2022 09:53:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Sat, Jan 29, 2022 at 09:53:25AM +0900, Michael Paquier wrote:\n> On Fri, Jan 28, 2022 at 06:27:29PM -0500, Andrew Dunstan wrote:\n>> I have committed this. But it will take time to get every buildfarm own\n>> to upgrade.\n> \n> Thanks for that.\n\nSo, it took me some time to get back to this thread, and looked at it\nfor the last couple of days... The buildfarm client v14 has been\nreleased on the 29th of January, which means that we are good to go.\n\nI have found one issue while reviewing things: the creation of the new\nsubdirectory and its contents should satisfy group permissions for the\nnew cluster's data folder, but we were not doing that properly as we\ncalled GetDataDirectoryCreatePerm() after make_outputdirs() so we\nmissed the proper values for create_mode and umask(). The rest looked\nfine, and I got a green CI run on my own repo. Hence, applied.\n\nI'll keep an eye on the buildfarm, in case.\n--\nMichael", "msg_date": "Sun, 6 Feb 2022 13:36:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "Hi,\n\nOn Sun, Feb 06, 2022 at 01:36:07PM +0900, Michael Paquier wrote:\n> \n> The buildfarm client v14 has been\n> released on the 29th of January, which means that we are good to go.\n\nI didn't follow that thread closely, but if having the latest buildfarm client\nversion installed is a hard requirement this will likely be a problem. 
First,\nthere was no email to warn buildfarm owners that a new version is available,\nand even if there was I doubt that every owner would have updated it since.\nEspecially since this is the lunar new year period, so at least 2 buildfarm\nowners (me included) are on holidays since last week.\n\n\n", "msg_date": "Sun, 6 Feb 2022 14:03:44 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Sun, Feb 06, 2022 at 02:03:44PM +0800, Julien Rouhaud wrote:\n> I didn't follow that thread closely, but if having the latest buildfarm client\n> version installed is a hard requirement this will likely be a problem. First,\n> there was no email to warn buildfarm owners that a new version is available,\n> and even if there was I doubt that every owner would have updated it since.\n> Especially since this is the lunar new year period, so at least 2 buildfarm\n> owners (me included) are on holidays since last week.\n\nThe buildfarm will still be able to work as it did so that's not a\nhard requirement per-se. The only thing changing is that we would not\nfind the logs in the event of a failure in the tests of pg_upgrade,\nand the buildfarm client is coded to never fail if it does not see\nlogs in some of the paths it looks at, it just holds the full history\nof the paths we have used across the ages.\n--\nMichael", "msg_date": "Sun, 6 Feb 2022 15:11:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> So, it took me some time to get back to this thread, and looked at it\n> for the last couple of days... 
The buildfarm client v14 has been\n> released on the 29th of January, which means that we are good to go.\n\nAs already mentioned, there's been no notice to buildfarm owners ...\nso has Andrew actually made a release?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 06 Feb 2022 01:58:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Sun, Feb 06, 2022 at 01:58:21AM -0500, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > So, it took me some time to get back to this thread, and looked at it\n> > for the last couple of days... The buildfarm client v14 has been\n> > released on the 29th of January, which means that we are good to go.\n> \n> As already mentioned, there's been no notice to buildfarm owners ...\n> so has Andrew actually made a release?\n\nThere's a v14 release on the github project ([1]) from 8 days ago, so it seems\nso.\n\n[1] https://github.com/PGBuildFarm/client-code/releases/tag/REL_14\n\n\n", "msg_date": "Sun, 6 Feb 2022 15:03:11 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Sun, Feb 06, 2022 at 01:58:21AM -0500, Tom Lane wrote:\n> As already mentioned, there's been no notice to buildfarm owners ...\n> so has Andrew actually made a release?\n\nThere has been one as of 8 days ago:\nhttps://github.com/PGBuildFarm/client-code/releases\n\nAnd I have just looked at that as point of reference.\n--\nMichael", "msg_date": "Sun, 6 Feb 2022 16:04:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sun, Feb 06, 2022 at 01:58:21AM -0500, Tom Lane wrote:\n>> As already mentioned, there's 
been no notice to buildfarm owners ...\n>> so has Andrew actually made a release?\n\n> There has been one as of 8 days ago:\n> https://github.com/PGBuildFarm/client-code/releases\n\n[ scrapes buildfarm logs ... ]\n\nNot even Andrew's own buildfarm critters are using it, so\npermit me leave to doubt that he thinks it's fully baked.\n\nAndrew?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 06 Feb 2022 02:17:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "\nOn 2/6/22 02:17, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> On Sun, Feb 06, 2022 at 01:58:21AM -0500, Tom Lane wrote:\n>>> As already mentioned, there's been no notice to buildfarm owners ...\n>>> so has Andrew actually made a release?\n>> There has been one as of 8 days ago:\n>> https://github.com/PGBuildFarm/client-code/releases\n> [ scrapes buildfarm logs ... ]\n>\n> Not even Andrew's own buildfarm critters are using it, so\n> permit me leave to doubt that he thinks it's fully baked.\n>\n> Andrew?\n>\n> \t\t\t\n\n\n*sigh* Sometimes I have a mind like a sieve. I prepped the release a few\ndays ago and meant to come back the next morning and send out emails\nannouncing it, as well as rolling it out to my animals, and got diverted\nso that didn't happen and it slipped my mind. I'll go and do those\nthings now.\n\nBut the commit really shouldn't have happened until we know that most\nbuildfarm owners have installed it. It should have waited not just\nfor the release but for widespread deployment. 
Otherwise we will just\nlose any logging for an error that might appear.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 6 Feb 2022 08:32:59 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Sun, Feb 06, 2022 at 08:32:59AM -0500, Andrew Dunstan wrote:\n> *sigh* Sometimes I have a mind like a sieve. I prepped the release a few\n> days ago and meant to come back the next morning and send out emails\n> announcing it, as well as rolling it out to my animals, and got diverted\n> so that didn't happen and it slipped my mind. I'll go and do those\n> things now.\n\nThanks. I saw the release listed after a couple of days of\nhibernation, and that one week went by since, so I thought that the\ntiming was pretty good. I did not check the buildfarm members though,\nsorry about that.\n\n> But the commit really shouldn't have happened until we know that most\n> buildfarm owners have installed it. It should have waited wait not just\n> for the release but for widespread deployment. Otherwise we will just\n> lose any logging for an error that might appear.\n\nWould it be better if I just revert the change for now then and do it\nagain in one/two weeks? 
The buildfarm is green, so keeping things as \nthey are does not sound like a huge deal to me, either, for this\ncase.\n\nFWIW, I have already switched my own animal to use the newest\nbuildfarm client.\n--\nMichael", "msg_date": "Mon, 7 Feb 2022 09:33:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sun, Feb 06, 2022 at 08:32:59AM -0500, Andrew Dunstan wrote:\n>> But the commit really shouldn't have happened until we know that most\n>> buildfarm owners have installed it. It should have waited wait not just\n>> for the release but for widespread deployment. Otherwise we will just\n>> lose any logging for an error that might appear.\n\n> Would it be better if I just revert the change for now then and do it\n> again in one/two weeks?\n\nI don't see a need to revert it.\n\nI note, though, that there's still not been any email to the buildfarm\nowners list about this update.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 06 Feb 2022 19:39:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "\n\n> On Feb 6, 2022, at 7:39 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Michael Paquier <michael@paquier.xyz> writes:\n>>> On Sun, Feb 06, 2022 at 08:32:59AM -0500, Andrew Dunstan wrote:\n>>> But the commit really shouldn't have happened until we know that most\n>>> buildfarm owners have installed it. It should have waited wait not just\n>>> for the release but for widespread deployment. 
Otherwise we will just\n>>> lose any logging for an error that might appear.\n> \n>> Would it be better if I just revert the change for now then and do it\n>> again in one/two weeks?\n> \n> I don't see a need to revert it.\n> \n> I note, though, that there's still not been any email to the buildfarm\n> owners list about this update.\n> \n> \n\nIt’s stuck in moderation \n\nCheers\n\nAndrew\n\n", "msg_date": "Sun, 6 Feb 2022 19:47:32 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "\nOn 2/6/22 19:39, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> On Sun, Feb 06, 2022 at 08:32:59AM -0500, Andrew Dunstan wrote:\n>>> But the commit really shouldn't have happened until we know that most\n>>> buildfarm owners have installed it. It should have waited wait not just\n>>> for the release but for widespread deployment. Otherwise we will just\n>>> lose any logging for an error that might appear.\n>> Would it be better if I just revert the change for now then and do it\n>> again in one/two weeks?\n> I don't see a need to revert it.\n>\n> I note, though, that there's still not been any email to the buildfarm\n> owners list about this update.\n>\n> \t\t\t\n\n\nThe announcement was held up in list moderation for 20 hours or so.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 7 Feb 2022 11:00:22 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" }, { "msg_contents": "On Mon, Feb 07, 2022 at 11:00:22AM -0500, Andrew Dunstan wrote:\n> \n> On 2/6/22 19:39, Tom Lane wrote:\n> >\n> > I note, though, that there's still not been any email to the buildfarm\n> > owners list about this update.\n> \n> The announcement was held up in list moderation for 20 hours or 
so.\n\nI've certainly experienced way more than that in the past. Are volunteers\nneeded?\n\n\n", "msg_date": "Tue, 8 Feb 2022 00:31:45 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade should truncate/remove its logs before running" } ]
[ { "msg_contents": "Hello -hackers!\n\nPlease have a look at the attached patch, which implements some \nstatistics for TOAST.\n\nThe idea (and patch) have been lurking here for quite a while now, so I \ndecided to dust it off, rebase it to HEAD and send it out for review today.\n\nA big shoutout to Georgios Kokolatos, who gave me a crash course in PG \nhacking, some very useful hints and valueable feedback early this year.\n\nI'd like to get some feedback about the general idea, approach, naming \netc. before refining this further.\n\nI'm not a C person and I s**k at git, so please be kind with me! ;-)\nAlso, I'm not subscribed here, so a CC would be much appreciated!\n\n\nWhy gather TOAST statistics?\n============================\nTOAST is transparent and opaque at the same time.\nWhilst we know that it's there and we know _that_ it works, we cannot \ngenerally tell _how well_ it works.\n\nWhat we can't answer (easily) are questions like e.g.\n- how many datums have been externalized?\n- how many datums have been compressed?\n- how often has a compression failed (resulted in no space saving)?\n- how effective is the compression algorithm used on a column?\n- how much time did the DB spend compressing/decompressing TOAST values?\n\nThe patch adds some functionality that will eventually be able to answer \nthese (and probably more) questions.\n\nCurrently, #1 - #4 can be answered based on the view contained in \n\"pg_stats_toast.sql\":\n\npostgres=# CREATE TABLE test (i int, lz4 text COMPRESSION lz4, std text);\npostgres=# INSERT INTO test SELECT \ni,repeat(md5(i::text),100),repeat(md5(i::text),100) FROM \ngenerate_series(0,100000) x(i);\npostgres=# SELECT * FROM pg_stat_toast WHERE schemaname = 'public';\n-[ RECORD 1 ]--------+----------\nschemaname | public\nreloid | 16829\nattnum | 2\nrelname | test\nattname | lz4\nexternalizations | 0\ncompressions | 100001\ncompressionsuccesses | 100001\ncompressionsizesum | 6299710\noriginalsizesum | 320403204\n-[ RECORD 2 
]--------+----------\nschemaname | public\nreloid | 16829\nattnum | 3\nrelname | test\nattname | std\nexternalizations | 0\ncompressions | 100001\ncompressionsuccesses | 100001\ncompressionsizesum | 8198819\noriginalsizesum | 320403204\n\n\nImplementation\n==============\nI added some callbacks in backend/access/table/toast_helper.c to \n\"pgstat_report_toast_activity\" in backend/postmaster/pgstat.c.\n\nThe latter (and the other additions there) are essentially 1:1 copies of \nthe function statistics.\n\nThose were the perfect template, as IMHO the TOAST activities (well, \nwhat we're interested in at least) are very much comparable to function \ncalls:\na) It doesn't really matter if the TOASTed data was committed, as \"the \ndamage is done\" (i.e. CPU cycles were used) anyway\nb) The information can (thus/best) be stored on DB level, no need to \ntouch the relation or attribute statistics\n\nI didn't find anything that could have been used as a hash key, so the\n PgStat_StatToastEntry\nuses the shiny new\n PgStat_BackendAttrIdentifier\n(containing relid Oid, attr int).\n\nFor persisting in the statsfile, I chose the identifier 'O' (as 'T' was \ntaken).\n\n\nWhat's working?\n===============\n- Gathering of TOAST externalization and compression events\n- collecting the sizes before and after compression\n- persisting in statsfile\n- not breaking \"make check\"\n- not crashing anything (afaict)\n\nWhat's missing (yet)?\n===============\n- proper definition of the \"pgstat_track_toast\" GUC\n- Gathering of times (for compression [and decompression?])\n- improve \"pg_stat_toast\" view and include it in the catalog\n- documentation (obviously)\n- proper naming (of e.g. 
the hash key type, functions, view columns etc.)\n- would it be necessary to implement overflow protection for the size & \ntime sums?\n\nThanks in advance & best regards,\n-- \nGunnar \"Nick\" Bluth\n\nEimermacherweg 106\nD-48159 Münster\n\nMobil +49 172 8853339\nEmail: gunnar.bluth@pro-open.de\n__________________________________________________________________________\n\"Ceterum censeo SystemD esse delendam\" - Cato", "msg_date": "Sun, 12 Dec 2021 17:20:58 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <gunnar.bluth@pro-open.de>", "msg_from_op": true, "msg_subject": "[PATCH] pg_stat_toast" }, { "msg_contents": "Hi,\n\nOn 2021-12-12 17:20:58 +0100, Gunnar \"Nick\" Bluth wrote:\n> Please have a look at the attached patch, which implements some statistics\n> for TOAST.\n> \n> The idea (and patch) have been lurking here for quite a while now, so I\n> decided to dust it off, rebase it to HEAD and send it out for review today.\n> \n> A big shoutout to Georgios Kokolatos, who gave me a crash course in PG\n> hacking, some very useful hints and valueable feedback early this year.\n> \n> I'd like to get some feedback about the general idea, approach, naming etc.\n> before refining this further.\n\nI'm worried about the additional overhead this might impose. For some workload\nit'll substantially increase the amount of stats traffic. Have you tried to\nqualify the overheads? 
Both in stats size and in stats management overhead?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 12 Dec 2021 13:52:48 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_toast" }, { "msg_contents": "Am 12.12.21 um 22:52 schrieb Andres Freund:\n> Hi,\n> \n> On 2021-12-12 17:20:58 +0100, Gunnar \"Nick\" Bluth wrote:\n>> Please have a look at the attached patch, which implements some statistics\n>> for TOAST.\n>>\n>> The idea (and patch) have been lurking here for quite a while now, so I\n>> decided to dust it off, rebase it to HEAD and send it out for review today.\n>>\n>> A big shoutout to Georgios Kokolatos, who gave me a crash course in PG\n>> hacking, some very useful hints and valueable feedback early this year.\n>>\n>> I'd like to get some feedback about the general idea, approach, naming etc.\n>> before refining this further.\n> \n> I'm worried about the additional overhead this might impose. For some workload\n> it'll substantially increase the amount of stats traffic. Have you tried to\n> qualify the overheads? Both in stats size and in stats management overhead?\n\nI'd lie if I claimed so...\n\nRegarding stats size; it adds one PgStat_BackendToastEntry \n(PgStat_BackendAttrIdentifier + PgStat_ToastCounts, should be 56-64 \nbytes or something in that ballpark) per TOASTable attribute, I can't \nsee that make any system break sweat ;-)\n\nA quick run comparing 1.000.000 INSERTs (2 TOASTable columns each) with \nand without \"pgstat_track_toast\" resulted in 12792.882 ms vs. 12810.557 \nms. 
So at least the call overhead seems to be neglectible.\n\nObviously, this was really a quick run and doesn't reflect real life.\nI'll have the machine run some reasonable tests asap, also looking at \nstat size, of course!\n\nBest regards,\n-- \nGunnar \"Nick\" Bluth\n\nEimermacherweg 106\nD-48159 Münster\n\nMobil +49 172 8853339\nEmail: gunnar.bluth@pro-open.de\n__________________________________________________________________________\n\"Ceterum censeo SystemD esse delendam\" - Cato\n\n\n", "msg_date": "Mon, 13 Dec 2021 00:00:23 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <gunnar.bluth@pro-open.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_stat_toast" }, { "msg_contents": "Hi,\n\nOn 2021-12-13 00:00:23 +0100, Gunnar \"Nick\" Bluth wrote:\n> Regarding stats size; it adds one PgStat_BackendToastEntry\n> (PgStat_BackendAttrIdentifier + PgStat_ToastCounts, should be 56-64 bytes or\n> something in that ballpark) per TOASTable attribute, I can't see that make\n> any system break sweat ;-)\n\nThat's actually a lot. The problem is that all the stats data for a database\nis loaded into private memory for each connection to that database, and that\nthe stats collector regularly writes out all the stats data for a database.\n\n\n> A quick run comparing 1.000.000 INSERTs (2 TOASTable columns each) with and\n> without \"pgstat_track_toast\" resulted in 12792.882 ms vs. 12810.557 ms. 
So\n> at least the call overhead seems to be neglectible.\n\nYea, you'd probably need a few more tables and a few more connections for it\nto have a chance of mattering meaningfully.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 12 Dec 2021 15:41:13 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_toast" }, { "msg_contents": "Am 13.12.21 um 00:41 schrieb Andres Freund:\n> Hi,\n> \n> On 2021-12-13 00:00:23 +0100, Gunnar \"Nick\" Bluth wrote:\n>> Regarding stats size; it adds one PgStat_BackendToastEntry\n>> (PgStat_BackendAttrIdentifier + PgStat_ToastCounts, should be 56-64 bytes or\n>> something in that ballpark) per TOASTable attribute, I can't see that make\n>> any system break sweat ;-)\n> \n> That's actually a lot. The problem is that all the stats data for a database\n> is loaded into private memory for each connection to that database, and that\n> the stats collector regularly writes out all the stats data for a database.\n\nMy understanding is that the stats file is only pulled into the backend \nwhen the SQL functions (for the view) are used (see \n\"pgstat_fetch_stat_toastentry()\").\n\nOtherwise, a backend just initializes an empty hash, right?\n\nOf which I reduced the initial size from 512 to 32 for the below tests \n(I guess the \"truth\" lies somewhere in between here), along with making \nthe GUC parameter an actual GUC parameter and disabling the elog() calls \nI scattered all over the place ;-) for the v0.2 patch attached.\n\n>> A quick run comparing 1.000.000 INSERTs (2 TOASTable columns each) with and\n>> without \"pgstat_track_toast\" resulted in 12792.882 ms vs. 12810.557 ms. 
So\n>> at least the call overhead seems to be neglectible.\n> \n> Yea, you'd probably need a few more tables and a few more connections for it\n> to have a chance of mattering meaningfully.\n\nSo, I went ahead and\n* set up 2 clusters with \"track_toast\" off and on resp.\n* created 100 DBs\n * each with 100 tables\n * with one TOASTable column in each table\n * filling those with 32000 bytes of md5 garbage\n\nThese clusters sum up to ~ 2GB each, so differences should _start to_ \nshow up, I reckon.\n\n$ du -s testdb*\n2161208 testdb\n2163240 testdb_tracking\n\n$ du -s testdb*/pg_stat\n4448 testdb/pg_stat\n4856 testdb_tracking/pg_stat\n\nThe db_*.stat files are 42839 vs. 48767 bytes each (so confirmed, the \ndifferences do show).\n\n\nNo idea if this is telling us anything, tbth, but the \n/proc/<pid>/smaps_rollup for a backend serving one of these DBs look \nlike this (\"0 kB\" lines omitted):\n\ntrack_toast OFF\n===============\nRss: 12428 kB\nPss: 5122 kB\nPss_Anon: 1310 kB\nPss_File: 2014 kB\nPss_Shmem: 1797 kB\nShared_Clean: 5864 kB\nShared_Dirty: 3500 kB\nPrivate_Clean: 1088 kB\nPrivate_Dirty: 1976 kB\nReferenced: 11696 kB\nAnonymous: 2120 kB\n\ntrack_toast ON (view not called yet):\n=====================================\nRss: 12300 kB\nPss: 4883 kB\nPss_Anon: 1309 kB\nPss_File: 1888 kB\nPss_Shmem: 1685 kB\nShared_Clean: 6040 kB\nShared_Dirty: 3468 kB\nPrivate_Clean: 896 kB\nPrivate_Dirty: 1896 kB\nReferenced: 11572 kB\nAnonymous: 2116 kB\n\ntrack_toast ON (view called):\n=============================\nRss: 15408 kB\nPss: 7482 kB\nPss_Anon: 2083 kB\nPss_File: 2572 kB\nPss_Shmem: 2826 kB\nShared_Clean: 6616 kB\nShared_Dirty: 3532 kB\nPrivate_Clean: 1472 kB\nPrivate_Dirty: 3788 kB\nReferenced: 14704 kB\nAnonymous: 2884 kB\n\nThat backend used some memory for displaying the result too, of course...\n\nA backend with just two TOAST columns in one table (filled with \n1.000.001 rows) looks like this before and after calling the \n\"pg_stat_toast\" view:\nRss: 
146208 kB\nPss: 116181 kB\nPss_Anon: 2050 kB\nPss_File: 2787 kB\nPss_Shmem: 111342 kB\nShared_Clean: 6636 kB\nShared_Dirty: 45928 kB\nPrivate_Clean: 1664 kB\nPrivate_Dirty: 91980 kB\nReferenced: 145532 kB\nAnonymous: 2844 kB\n\nRss: 147736 kB\nPss: 103296 kB\nPss_Anon: 2430 kB\nPss_File: 3147 kB\nPss_Shmem: 97718 kB\nShared_Clean: 6992 kB\nShared_Dirty: 74056 kB\nPrivate_Clean: 1984 kB\nPrivate_Dirty: 64704 kB\nReferenced: 147092 kB\nAnonymous: 3224 kB\n\nAfter creating 10.000 more tables (view shows 10.007 rows now), before \nand after calling \"TABLE pg_stat_toast\":\nRss: 13816 kB\nPss: 4898 kB\nPss_Anon: 1314 kB\nPss_File: 1755 kB\nPss_Shmem: 1829 kB\nShared_Clean: 5972 kB\nShared_Dirty: 5760 kB\nPrivate_Clean: 832 kB\nPrivate_Dirty: 1252 kB\nReferenced: 13088 kB\nAnonymous: 2124 kB\n\nRss: 126816 kB\nPss: 55213 kB\nPss_Anon: 5383 kB\nPss_File: 2615 kB\nPss_Shmem: 47215 kB\nShared_Clean: 6460 kB\nShared_Dirty: 113028 kB\nPrivate_Clean: 1600 kB\nPrivate_Dirty: 5728 kB\nReferenced: 126112 kB\nAnonymous: 6184 kB\n\n\nThat DB's stat-file is now 4.119.254 bytes (3.547.439 without track_toast).\n\nAfter VACUUM ANALYZE, the size goes up to 5.919.812 (5.348.768).\nThe \"100 tables\" DBs' go to 97.910 (91.868) bytes.\n\nIn total:\n$ du -s testdb*/pg_stat\n14508 testdb/pg_stat\n15472 testdb_tracking/pg_stat\n\n\nIMHO, this would be ok to at least enable temporarily (e.g. 
to find out \nif MAIN or EXTERNAL storage/LZ4 compression would be ok/better for some \ncolumns).\n\nAll the best,\n-- \nGunnar \"Nick\" Bluth\n\nEimermacherweg 106\nD-48159 Münster\n\nMobil +49 172 8853339\nEmail: gunnar.bluth@pro-open.de\n__________________________________________________________________________\n\"Ceterum censeo SystemD esse delendam\" - Cato", "msg_date": "Mon, 13 Dec 2021 14:21:11 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <gunnar.bluth@pro-open.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_stat_toast" }, { "msg_contents": "Dear Gunnar,\r\n\r\n> postgres=# CREATE TABLE test (i int, lz4 text COMPRESSION lz4, std text);\r\n> postgres=# INSERT INTO test SELECT\r\n> i,repeat(md5(i::text),100),repeat(md5(i::text),100) FROM\r\n> generate_series(0,100000) x(i);\r\n> postgres=# SELECT * FROM pg_stat_toast WHERE schemaname = 'public';\r\n> -[ RECORD 1 ]--------+----------\r\n> schemaname | public\r\n> reloid | 16829\r\n> attnum | 2\r\n> relname | test\r\n> attname | lz4\r\n> externalizations | 0\r\n> compressions | 100001\r\n> compressionsuccesses | 100001\r\n> compressionsizesum | 6299710\r\n> originalsizesum | 320403204\r\n> -[ RECORD 2 ]--------+----------\r\n> schemaname | public\r\n> reloid | 16829\r\n> attnum | 3\r\n> relname | test\r\n> attname | std\r\n> externalizations | 0\r\n> compressions | 100001\r\n> compressionsuccesses | 100001\r\n> compressionsizesum | 8198819\r\n> originalsizesum | 320403204\r\n\r\nI'm not sure about TOAST, but currently compressions are configurable:\r\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=bbe0a81db69bd10bd166907c3701492a29aca294\r\n\r\nHow about adding a new attribute \"method\" to pg_stat_toast?\r\nToastAttrInfo *attr->tai_compression represents how compress the data,\r\nso I think it's easy to add.\r\nOr, is it not needed because pg_attr has information?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Mon, 20 Dec 2021 03:20:55 +0000", 
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] pg_stat_toast" }, { "msg_contents": "Am 20.12.2021 um 04:20 schrieb kuroda.hayato@fujitsu.com:\n> Dear Gunnar,\n\nHi Kuroda-San!\n\n>> postgres=# CREATE TABLE test (i int, lz4 text COMPRESSION lz4, std text);\n>> postgres=# INSERT INTO test SELECT\n>> i,repeat(md5(i::text),100),repeat(md5(i::text),100) FROM\n>> generate_series(0,100000) x(i);\n>> postgres=# SELECT * FROM pg_stat_toast WHERE schemaname = 'public';\n>> -[ RECORD 1 ]--------+----------\n>> schemaname | public\n>> reloid | 16829\n>> attnum | 2\n>> relname | test\n>> attname | lz4\n>> externalizations | 0\n>> compressions | 100001\n>> compressionsuccesses | 100001\n>> compressionsizesum | 6299710\n>> originalsizesum | 320403204\n>> -[ RECORD 2 ]--------+----------\n>> schemaname | public\n>> reloid | 16829\n>> attnum | 3\n>> relname | test\n>> attname | std\n>> externalizations | 0\n>> compressions | 100001\n>> compressionsuccesses | 100001\n>> compressionsizesum | 8198819\n>> originalsizesum | 320403204\n> \n> I'm not sure about TOAST, but currently compressions are configurable:\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=bbe0a81db69bd10bd166907c3701492a29aca294\n> \n> How about adding a new attribute \"method\" to pg_stat_toast?\n> ToastAttrInfo *attr->tai_compression represents how compress the data,\n> so I think it's easy to add.\n> Or, is it not needed because pg_attr has information?\n\nThat information could certainly be included in the view, grabbing the \ninformation from pg_attribute.attcompression. 
It probably should!\n\nI guess the next step will be to include that view in the catalog \nanyway, so I'll do that next.\n\nThx for the feedback!\n-- \nGunnar \"Nick\" Bluth\n\nEimermacherweg 106\nD-48159 Münster\n\nMobil +49 172 8853339\nEmail: gunnar.bluth@pro-open.de\n__________________________________________________________________________\n\"Ceterum censeo SystemD esse delendam\" - Cato\n\n\n", "msg_date": "Mon, 20 Dec 2021 09:43:44 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <gunnar.bluth@pro-open.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_stat_toast" }, { "msg_contents": "Am 12.12.21 um 17:20 schrieb Gunnar \"Nick\" Bluth:\n> Hello -hackers!\n> \n> Please have a look at the attached patch, which implements some \n> statistics for TOAST.\n\nThe attached v0.3 includes\n* a proper GUC \"track_toast\" incl. postgresql.conf.sample line\n* gathering timing information\n* the system view \"pg_stat_toast\"\n * naming improvements more than welcome!\n * columns \"storagemethod\" and \"compressmethod\" added as per Hayato \nKuroda's suggestion\n* documentation (pointing out the potential impacts as per Andres \nFreund's reservations)\n\nAny hints on how to write meaningful tests would be much appreciated!\nI.e., will a test\n\nINSERTing some long text to a column raises the view counts and \n\"compressedsize\" is smaller than \"originalsize\"\n\nsuffice?!?\n\nCheers & best regards,\n-- \nGunnar \"Nick\" Bluth\n\nEimermacherweg 106\nD-48159 Münster\n\nMobil +49 172 8853339\nEmail: gunnar.bluth@pro-open.de\n__________________________________________________________________________\n\"Ceterum censeo SystemD esse delendam\" - Cato", "msg_date": "Mon, 20 Dec 2021 14:31:17 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <gunnar.bluth@pro-open.de>", "msg_from_op": true, "msg_subject": "[PATCH] pg_stat_toast v0.3" }, { "msg_contents": "Dear Gunnar,\r\n\r\n> The attached v0.3 includes\r\n> * columns \"storagemethod\" and \"compressmethod\" added as 
per Hayato\r\n> Kuroda's suggestion\r\n\r\nI prefer your implementation that refers to another system view.\r\n\r\n> * gathering timing information\r\n\r\nHere is a minor comment:\r\nI'm not sure when we should start to measure time.\r\nIf you want to record time spent for compressing, toast_compress_datum() should be\r\nsandwiched by INSTR_TIME_SET_CURRENT() calls to calculate the time duration.\r\nCurrently time_spent is calculated in pgstat_report_toast_activity(),\r\nbut this has a systematic error.\r\nIf you want to record time spent for inserting/updating, however,\r\nI think we should start measuring at the upper layer, maybe heap_toast_insert_or_update().\r\n\r\n> Any hints on how to write meaningful tests would be much appreciated!\r\n> I.e., will a test\r\n\r\nI searched for hints in the PG sources, and I think the modules/ subdirectory can be used.\r\nThe following is an example:\r\nhttps://github.com/postgres/postgres/tree/master/src/test/modules/commit_ts\r\n\r\nI attached a patch to create a new test.
Could you rewrite the sql and the output file?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED", "msg_date": "Tue, 21 Dec 2021 12:51:21 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] pg_stat_toast v0.3" }, { "msg_contents": "Am 21.12.21 um 13:51 schrieb kuroda.hayato@fujitsu.com:\n> Dear Gunnar,\n\nHayato-san, all,\n\nthanks for the feedback!\n\n>> * gathering timing information\n> \n> Here is a minor comment:\n> I'm not sure when we should start to measure time.\n> If you want to record time spent for compressing, toast_compress_datum() should be\n> sandwiched by INSTR_TIME_SET_CURRENT() and caclurate the time duration.\n> Currently time_spent is calcurated in the pgstat_report_toast_activity(),\n> but this have a systematic error.\n> If you want to record time spent for inserting/updating, however,\n> I think we should start measuring at the upper layer, maybe heap_toast_insert_or_update().\n\nYes, both toast_compress_datum() and toast_save_datum() are sandwiched \nthe way you mentioned, as that's exactly what we want to measure (time \nspent doing compression and/or externalizing data).\n\nImplementation-wise, I (again) took \"track_functions\" as a template \nthere, assuming that jumping into pgstat_report_toast_activity() and \nonly then checking if \"track_toast = on\" isn't too costly (we call \npgstat_init_function_usage() and pgstat_end_function_usage() a _lot_).\n\nI did miss though that\n INSTR_TIME_SET_CURRENT(time_spent);\nshould be called right after entering pgstat_report_toast_activity(), as \nthat might need additional clock ticks for setting up the hash etc.\nThat's fixed now.\n\nWhat I can't assess is the cost of the unconditional call to \nINSTR_TIME_SET_CURRENT(start_time) in both toast_tuple_try_compression() \nand toast_tuple_externalize().\n\nWould it be wise (cheaper) to add a check like\n if (pgstat_track_toast)\nbefore querying the system 
clock?\n\n\n>> Any hints on how to write meaningful tests would be much appreciated!\n>> I.e., will a test\n> \n> I searched hints from PG sources, and I thought that modules/ subdirectory can be used.\n> Following is the example:\n> https://github.com/postgres/postgres/tree/master/src/test/modules/commit_ts\n> \n> I attached a patch to create a new test. Could you rewrite the sql and the output file?\n\nThanks for the kickstart ;-)\n\nSome tests (as meaningful as they may get, I'm afraid) are now in\nsrc/test/modules/track_toast/.\n\"make check-world\" executes them successfully, although only after I \nintroduced a \"SELECT pg_sleep(1);\" to them.\n\n\npg_stat_toast_v0.4.patch attached.\n\nBest regards,\n-- \nGunnar \"Nick\" Bluth\n\nEimermacherweg 106\nD-48159 Münster\n\nMobil +49 172 8853339\nEmail: gunnar.bluth@pro-open.de\n__________________________________________________________________________\n\"Ceterum censeo SystemD esse delendam\" - Cato", "msg_date": "Mon, 3 Jan 2022 16:52:03 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <gunnar.bluth@pro-open.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_stat_toast v0.4" }, { "msg_contents": "Am 03.01.22 um 16:52 schrieb Gunnar \"Nick\" Bluth:\n\n> pg_stat_toast_v0.4.patch attached.\n\nAaaand I attached a former version of the patch file... 
sorry, I'm kind \nof struggling with all the squashing/rebasing...\n-- \nGunnar \"Nick\" Bluth\n\nEimermacherweg 106\nD-48159 Münster\n\nMobil +49 172 8853339\nEmail: gunnar.bluth@pro-open.de\n__________________________________________________________________________\n\"Ceterum censeo SystemD esse delendam\" - Cato", "msg_date": "Mon, 3 Jan 2022 17:00:45 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <gunnar.bluth@pro-open.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_stat_toast v0.4" }, { "msg_contents": "On Mon, Jan 03, 2022 at 05:00:45PM +0100, Gunnar \"Nick\" Bluth wrote:\n> Am 03.01.22 um 16:52 schrieb Gunnar \"Nick\" Bluth:\n> \n> > pg_stat_toast_v0.4.patch attached.\n\nNote that the cfbot says this fails under windows\n\nhttp://cfbot.cputube.org/gunnar-quotnickquot-bluth.html\n...\n[16:47:05.347] Could not determine contrib module type for track_toast\n[16:47:05.347] at src/tools/msvc/mkvcbuild.pl line 31.\n\n> Aaaand I attached a former version of the patch file... sorry, I'm kind of\n> struggling with all the squashing/rebasing...\n\nSoon you will think this is fun :)\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 3 Jan 2022 10:50:12 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_toast v0.4" }, { "msg_contents": "Am 03.01.22 um 17:50 schrieb Justin Pryzby:\n> On Mon, Jan 03, 2022 at 05:00:45PM +0100, Gunnar \"Nick\" Bluth wrote:\n>> Am 03.01.22 um 16:52 schrieb Gunnar \"Nick\" Bluth:\n>>\n>>> pg_stat_toast_v0.4.patch attached.\n> \n> Note that the cfbot says this fails under windows\n\nThanks for the heads up!\n\n\n> \n> http://cfbot.cputube.org/gunnar-quotnickquot-bluth.html\n> ...\n> [16:47:05.347] Could not determine contrib module type for track_toast\n> [16:47:05.347] at src/tools/msvc/mkvcbuild.pl line 31.\n\nNot only Window$... as it turns out, one of the checks was pretty bogus. 
\nKicked that one and instead wrote two (hopefully) meaningful ones.\n\nAlso, I moved the tests to regress/, as they're not really for a module \nanyway.\n\nLet's see how this fares!\n\n>> Aaaand I attached a former version of the patch file... sorry, I'm kind of\n>> struggling with all the squashing/rebasing...\n> \n> Soon you will think this is fun :)\n\nAs long as you're happy with plain patches like the attached one, I may ;-)\n\nAll the best,\n-- \nGunnar \"Nick\" Bluth\n\nEimermacherweg 106\nD-48159 Münster\n\nMobil +49 172 8853339\nEmail: gunnar.bluth@pro-open.de\n__________________________________________________________________________\n\"Ceterum censeo SystemD esse delendam\" - Cato", "msg_date": "Mon, 3 Jan 2022 19:01:54 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <gunnar.bluth@pro-open.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_stat_toast v0.4" }, { "msg_contents": "On 2022-Jan-03, Gunnar \"Nick\" Bluth wrote:\n\n> Am 03.01.22 um 17:50 schrieb Justin Pryzby:\n\n> > Soon you will think this is fun :)\n> \n> As long as you're happy with plain patches like the attached one, I may ;-)\n\nWell, with a zero-byte patch, not very much ...\n\nBTW why do you number with a \"0.\" prefix? It could just be \"4\" and \"5\"\nand so on. There's no value in two-part version numbers for patches.\nAlso, may I suggest that \"git format-patch -vN\" with varying N is an\neasier way to generate patches to attach?\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Hay quien adquiere la mala costumbre de ser infeliz\" (M. A. 
Evans)\n\n\n", "msg_date": "Mon, 3 Jan 2022 15:30:49 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_toast v0.4" }, { "msg_contents": "Am 03.01.22 um 19:30 schrieb Alvaro Herrera:\n> On 2022-Jan-03, Gunnar \"Nick\" Bluth wrote:\n> \n>> Am 03.01.22 um 17:50 schrieb Justin Pryzby:\n> \n>>> Soon you will think this is fun :)\n>>\n>> As long as you're happy with plain patches like the attached one, I may ;-)\n> \n> Well, with a zero-byte patch, not very much ...\n\nD'oh!!!!\n\n\n> BTW why do you number with a \"0.\" prefix? It could just be \"4\" and \"5\"\n> and so on. There's no value in two-part version numbers for patches.\n> Also, may I suggest that \"git format-patch -vN\" with varying N is an\n> easier way to generate patches to attach?\n\nNot when you have a metric ton of commits in the history... I'll \nhopefully find a way to start over soon :/\n\n9:38 $ git format-patch PGDG/master -v5 -o ..\n../v5-0001-ping-pong-of-thougths.patch\n../v5-0002-ping-pong-of-thougths.patch\n../v5-0003-adds-some-debugging-messages-in-toast_helper.c.patch\n../v5-0004-adds-some-groundwork-for-pg_stat_toast-to-pgstat..patch\n../v5-0005-fixes-wrong-type-for-pgstat_track_toast-GUC.patch\n../v5-0006-introduces-PgStat_BackendAttrIdentifier-OID-attr-.patch\n../v5-0007-implements-and-calls-pgstat_report_toast_activity.patch\n../v5-0008-Revert-adds-some-debugging-messages-in-toast_help.patch\n../v5-0009-adds-more-detail-to-logging.patch\n../v5-0010-adds-toastactivity-to-table-stats-and-many-helper.patch\n../v5-0011-fixes-missed-replacement-in-comment.patch\n../v5-0012-adds-SQL-support-functions.patch\n../v5-0013-Add-SQL-functions.patch\n../v5-0014-reset-to-HEAD.patch\n../v5-0015-makes-DEBUG2-messages-more-precise.patch\n../v5-0016-adds-timing-information-to-stats-and-view.patch\n../v5-0017-adds-a-basic-set-of-tests.patch\n../v5-0018-adds-a-basic-set-of-tests.patch\n../v5-0019-chooses-a-PGSTAT_TOAST_HASH_SIZE-of-64-ch
anges-ha.patch\n../v5-0020-removes-whitespace-trash.patch\n../v5-0021-returns-to-PGDG-master-.gitignore.patch\n../v5-0022-pg_stat_toast_v0.5.patch\n../v5-0023-moves-tests-to-regress.patch\n\nBut alas! INT versioning it is from now on!\n-- \nGunnar \"Nick\" Bluth\n\nEimermacherweg 106\nD-48159 Münster\n\nMobil +49 172 8853339\nEmail: gunnar.bluth@pro-open.de\n__________________________________________________________________________\n\"Ceterum censeo SystemD esse delendam\" - Cato", "msg_date": "Mon, 3 Jan 2022 19:42:55 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <gunnar.bluth@pro-open.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_stat_toast v0.4" }, { "msg_contents": "On 2022-Jan-03, Gunnar \"Nick\" Bluth wrote:\n\n> 9:38 $ git format-patch PGDG/master -v5 -o ..\n> ../v5-0001-ping-pong-of-thougths.patch\n> ../v5-0002-ping-pong-of-thougths.patch\n> ../v5-0003-adds-some-debugging-messages-in-toast_helper.c.patch\n> ...\n\nHmm, in such cases I would suggest to create a separate branch and then\n\"git merge --squash\" for submission. 
You can keep your development\nbranch separate, with other merges if you want.\n\nI've found this to be easier to manage, though I don't always follow\nthat workflow myself.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Investigación es lo que hago cuando no sé lo que estoy haciendo\"\n(Wernher von Braun)\n\n\n", "msg_date": "Mon, 3 Jan 2022 16:11:55 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_toast v0.4" }, { "msg_contents": "Am 03.01.22 um 20:11 schrieb Alvaro Herrera:\n> On 2022-Jan-03, Gunnar \"Nick\" Bluth wrote:\n> \n>> 9:38 $ git format-patch PGDG/master -v5 -o ..\n>> ../v5-0001-ping-pong-of-thougths.patch\n>> ../v5-0002-ping-pong-of-thougths.patch\n>> ../v5-0003-adds-some-debugging-messages-in-toast_helper.c.patch\n>> ...\n> \n> Hmm, in such cases I would suggest to create a separate branch and then\n> \"git merge --squash\" for submission. You can keep your development\n> branch separate, with other merges if you want.\n> \n> I've found this to be easier to manage, though I don't always follow\n> that workflow myself.\n> \n\nUsing --stdout does help ;-)\n\nI wonder why \"track_toast.sql\" test fails on Windows (with \"ERROR: \ncompression method lz4 not supported\"), but \"compression.sql\" doesn't.\nAny hints?\n\nAnyway, I shamelessly copied \"wait_for_stats()\" from the \"stats.sql\" \nfile and the tests _should_ now work at least on the platforms with lz4.\n\nv6 attached!\n-- \nGunnar \"Nick\" Bluth\n\nEimermacherweg 106\nD-48159 Münster\n\nMobil +49 172 8853339\nEmail: gunnar.bluth@pro-open.de\n__________________________________________________________________________\n\"Ceterum censeo SystemD esse delendam\" - Cato", "msg_date": "Mon, 3 Jan 2022 20:40:50 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <gunnar.bluth@pro-open.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_stat_toast v6" }, { "msg_contents": "On Mon, Jan 03, 2022 at 
08:40:50PM +0100, Gunnar \"Nick\" Bluth wrote:\n> I wonder why \"track_toast.sql\" test fails on Windows (with \"ERROR:\n> compression method lz4 not supported\"), but \"compression.sql\" doesn't.\n> Any hints?\n\nThe Windows CI doesn't have LZ4, so the SQL command fails, but there's an\n\"alternate\" expected/compression_1.out so that's accepted. (The regression\ntests exercise many commands which fail, as expected, like creating an index on\nan index).\n\nIf you're going to have an alternate file for the --without-lz4 case, then I think\nyou should put it into compression.sql. (But not if you needed an alternate\nfor something else, since we'd need 4 alternates, which is halfway to 8...).\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 3 Jan 2022 13:56:09 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_toast v6" }, { "msg_contents": "> +pgstat_report_toast_activity(Oid relid, int attr,\n> +\t\t\t\t\t\t\tbool externalized,\n> +\t\t\t\t\t\t\tbool compressed,\n> +\t\t\t\t\t\t\tint32 old_size,\n> +\t\t\t\t\t\t\tint32 new_size,\n...\n> +\t\tif (new_size)\n> +\t\t{\n> +\t\t\thtabent->t_counts.t_size_orig+=old_size;\n> +\t\t\tif (new_size)\n> +\t\t\t{\n\nI guess one of these is supposed to say old_size?\n\n> +\t\t&pgstat_track_toast,\n> +\t\tfalse,\n> +\t\tNULL, NULL, NULL\n> +\t},\n> \t{\n\n> +CREATE TABLE toast_test (cola TEXT, colb TEXT COMPRESSION lz4, colc TEXT , cold TEXT, cole TEXT);\n\nIs there a reason this uses lz4?\nIf that's needed for stable results, I think you should use pglz, since that's\nwhat's guaranteed to exist. I imagine LZ4 won't be required any time soon,\nseeing as zlib has never been required.\n\n> + Be aware that this feature, depending on the amount of TOASTable columns in\n> + your databases, may significantly increase the size of the statistics files\n> + and the workload of the statistics collector.
It is recommended to only\n> + temporarily activate this to assess the right compression and storage method\n> + for (a) column(s).\n\nsaying \"a column\" is fine\n\n> + <structfield>schemaname</structfield> <type>name</type>\n> + Attribute (column) number in the relation\n> + <structfield>relname</structfield> <type>name</type>\n\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>compressmethod</structfield> <type>char</type>\n> + </para>\n> + <para>\n> + Compression method of the attribute (empty means default)\n\nOne thing to keep in mind is that the current compression method is only used\nfor *new* data - old data can still use the old compression method. It\nprobably doesn't need to be said here, but maybe you can refer to the docs\nabout that in alter_table.\n\n> + Number of times the compression was successful (gained a size reduction)\n\nIt's more clear to say \"was reduced in size\"\n\n> +\t/* we assume this inits to all zeroes: */\n> +\tstatic const PgStat_ToastCounts all_zeroes;\n\nYou don't have to assume; static/global allocations are always zero unless\notherwise specified.\n\n\n", "msg_date": "Mon, 3 Jan 2022 15:03:11 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_toast v6" }, { "msg_contents": "Overall I think this is a good feature to have; assessing the need for\ncompression is important for tuning, so +1 for the goal of the patch.\n\nI didn't look into the patch carefully, but here are some minor\ncomments:\n\nOn 2022-Jan-03, Gunnar \"Nick\" Bluth wrote:\n\n> @@ -229,7 +230,9 @@ toast_tuple_try_compression(ToastTupleContext *ttc, int attribute)\n> \tDatum\t *value = &ttc->ttc_values[attribute];\n> \tDatum\t\tnew_value;\n> \tToastAttrInfo *attr = &ttc->ttc_attr[attribute];\n> +\tinstr_time\tstart_time;\n> \n> +\tINSTR_TIME_SET_CURRENT(start_time);\n> \tnew_value = toast_compress_datum(*value, attr->tai_compression);\n> \n> \tif 
(DatumGetPointer(new_value) != NULL)\n\nDon't INSTR_TIME_SET_CURRENT unconditionally; in some systems it's an\nexpensive syscall. Find a way to only do it if the feature is enabled.\nThis also suggests that perhaps it'd be a good idea to allow this to be\nenabled for specific tables only, rather than system-wide. (Maybe in\norder for stats to be collected, the user should have to both set the\nGUC option *and* set a per-table option? Not sure.)\n\n> @@ -82,10 +82,12 @@ typedef enum StatMsgType\n> \tPGSTAT_MTYPE_DEADLOCK,\n> \tPGSTAT_MTYPE_CHECKSUMFAILURE,\n> \tPGSTAT_MTYPE_REPLSLOT,\n> +\tPGSTAT_MTYPE_CONNECTION,\n\nI think this new enum value doesn't belong in this patch.\n\n> +/* ----------\n> + * PgStat_ToastEntry\t\t\tPer-TOAST-column info in a MsgFuncstat\n> + * ----------\n\nNot in \"a MsgFuncstat\", right?\n\n> +-- function to wait for counters to advance\n> +create function wait_for_stats() returns void as $$\n\nI don't think we want a separate copy of wait_for_stats; see commit\nfe60b67250a3 and the discussion leading to it. Maybe you'll want to\nmove the test to stats.sql. I'm not sure what to say about relying on\nLZ4; maybe you'll want to leave that part out, and just verify in an\nLZ4-enabled build that some 'l' entry exists. BTW, don't we have any\ndecent way to turn that 'l' into a more reasonable, descriptive string?\n\nAlso, perhaps make the view-defining query turn an empty compression\nmethod into whatever the default is. 
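A decoding of that single-char value is possible with a plain CASE over pg_attribute — a sketch that runs on stock PostgreSQL 14+ without this patch (the table name toast_test is taken from the test discussed above):

```sql
SELECT attname,
       CASE attcompression
            WHEN 'p' THEN 'pglz'
            WHEN 'l' THEN 'lz4'
            ELSE 'default (' || current_setting('default_toast_compression') || ')'
       END AS compression
  FROM pg_attribute
 WHERE attrelid = 'toast_test'::regclass
   AND attnum > 0;
```

The same expression could be folded into the view definition to show a descriptive name instead of the raw 'p'/'l'.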
Or even better, stats collection\nshould store the real compression method used rather than empty string,\nto avoid confusing things when some stats are collected, then the\ndefault is changed, then some more stats are collected.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Para tener más hay que desear menos\"\n\n\n", "msg_date": "Mon, 3 Jan 2022 18:23:18 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_toast v6" }, { "msg_contents": "Am 03.01.22 um 22:03 schrieb Justin Pryzby:\n>> +pgstat_report_toast_activity(Oid relid, int attr,\n>> +\t\t\t\t\t\t\tbool externalized,\n>> +\t\t\t\t\t\t\tbool compressed,\n>> +\t\t\t\t\t\t\tint32 old_size,\n>> +\t\t\t\t\t\t\tint32 new_size,\n> ...\n>> +\t\tif (new_size)\n>> +\t\t{\n>> +\t\t\thtabent->t_counts.t_size_orig+=old_size;\n>> +\t\t\tif (new_size)\n>> +\t\t\t{\n> \n> I guess one of these is supposed to say old_size?\n\nDidn't make a difference, tbth, as they'd both be 0 or have a value. \nStreamlined the whole block now.\n\n\n>> +CREATE TABLE toast_test (cola TEXT, colb TEXT COMPRESSION lz4, colc TEXT , cold TEXT, cole TEXT);\n> \n> Is there a reason this uses lz4 ?\n\nI thought it might help later on, but alas! the LZ4 column mainly broke \nthings, so I removed it for the time being.\n\n> If that's needed for stable results, I think you should use pglz, since that's\n> what's guaranteed to exist. I imagine LZ4 won't be required any time soon,\n> seeing as zlib has never been required.\n\nYeah. It didn't prove anything whatsoever.\n\n>> + Be aware that this feature, depending on the amount of TOASTable columns in\n>> + your databases, may significantly increase the size of the statistics files\n>> + and the workload of the statistics collector. 
It is recommended to only\n>> + temporarily activate this to assess the right compression and storage method\n>> + for (a) column(s).\n> \n> saying \"a column\" is fine\n\nChanged.\n\n> \n>> + <structfield>schemaname</structfield> <type>name</type>\n>> + Attribute (column) number in the relation\n>> + <structfield>relname</structfield> <type>name</type>\n> \n>> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n>> + <structfield>compressmethod</structfield> <type>char</type>\n>> + </para>\n>> + <para>\n>> + Compression method of the attribute (empty means default)\n> \n> One thing to keep in mind is that the current compression method is only used\n> for *new* data - old data can still use the old compression method. It\n> probably doesn't need to be said here, but maybe you can refer to the docs\n> about that in alter_table.\n> \n>> + Number of times the compression was successful (gained a size reduction)\n> \n> It's more clear to say \"was reduced in size\"\n\nChanged the wording a bit, I guess it is clear enough now.\nThe question is if the column should be there at all, as it's simply \nfetched from pg_attribute...\n\n> \n>> +\t/* we assume this inits to all zeroes: */\n>> +\tstatic const PgStat_ToastCounts all_zeroes;\n> \n> You don't have to assume; static/global allocations are always zero unless\n> otherwise specified.\n\nCopy-pasta ;-)\nRemoved.\n\nThx for looking into this!\nPatch v7 will be in the next mail.\n\n-- \nGunnar \"Nick\" Bluth\n\nEimermacherweg 106\nD-48159 Münster\n\nMobil +49 172 8853339\nEmail: gunnar.bluth@pro-open.de\n__________________________________________________________________________\n\"Ceterum censeo SystemD esse delendam\" - Cato\n\n\n", "msg_date": "Tue, 4 Jan 2022 12:11:23 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <gunnar.bluth@pro-open.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_stat_toast v6" }, { "msg_contents": "Am 03.01.22 um 22:23 schrieb Alvaro Herrera:\n> Overall I think 
this is a good feature to have; assessing the need for\n> compression is important for tuning, so +1 for the goal of the patch.\n\nMuch appreciated!\n\n\n> I didn't look into the patch carefully, but here are some minor\n> comments:\n> \n> On 2022-Jan-03, Gunnar \"Nick\" Bluth wrote:\n> \n>> @@ -229,7 +230,9 @@ toast_tuple_try_compression(ToastTupleContext *ttc, int attribute)\n>> \tDatum\t *value = &ttc->ttc_values[attribute];\n>> \tDatum\t\tnew_value;\n>> \tToastAttrInfo *attr = &ttc->ttc_attr[attribute];\n>> +\tinstr_time\tstart_time;\n>> \n>> +\tINSTR_TIME_SET_CURRENT(start_time);\n>> \tnew_value = toast_compress_datum(*value, attr->tai_compression);\n>> \n>> \tif (DatumGetPointer(new_value) != NULL)\n> \n> Don't INSTR_TIME_SET_CURRENT unconditionally; in some systems it's an\n> expensive syscall. Find a way to only do it if the feature is enabled.\n\nYeah, I was worried about that (and asking if it would be required) already.\nAdding the check was easier than I expected, though I'm absolutely \nclueless if I did it right!\n\n#include \"pgstat.h\"\nextern PGDLLIMPORT bool pgstat_track_toast;\n\n\n> This also suggests that perhaps it'd be a good idea to allow this to be\n> enabled for specific tables only, rather than system-wide. (Maybe in\n> order for stats to be collected, the user should have to both set the\n> GUC option *and* set a per-table option? Not sure.)\n\nThat would of course be nice, but I seriously doubt the required \nadditional logic would be justified. The patch currently tampers with as \nfew internal structures as possible, and for good reason... ;-)\n\n>> @@ -82,10 +82,12 @@ typedef enum StatMsgType\n>> \tPGSTAT_MTYPE_DEADLOCK,\n>> \tPGSTAT_MTYPE_CHECKSUMFAILURE,\n>> \tPGSTAT_MTYPE_REPLSLOT,\n>> +\tPGSTAT_MTYPE_CONNECTION,\n> \n> I think this new enum value doesn't belong in this patch.\n\nYeah, did I mention I'm struggling with rebasing? 
;-|\n\n\n>> +/* ----------\n>> + * PgStat_ToastEntry\t\t\tPer-TOAST-column info in a MsgFuncstat\n>> + * ----------\n> \n> Not in \"a MsgFuncstat\", right?\n\nObviously... fixed!\n\n> \n>> +-- function to wait for counters to advance\n>> +create function wait_for_stats() returns void as $$\n> \n> I don't think we want a separate copy of wait_for_stats; see commit\n> fe60b67250a3 and the discussion leading to it. Maybe you'll want to\n> move the test to stats.sql. I'm not sure what to say about relying on\n\nDid so.\n\n> LZ4; maybe you'll want to leave that part out, and just verify in an\n> LZ4-enabled build that some 'l' entry exists. BTW, don't we have any\n> decent way to turn that 'l' into a more reasonable, descriptive string?\n\n> Also, perhaps make the view-defining query turn an empty compression\n> method into whatever the default is.\n\nI'm not even sure that having it in there is useful at all. It's simply \nJOINed in from pg_attribute.\nWhich is where I'd see that \"make it look nicer\" change happening, tbth. 
;-)\n\n> Or even better, stats collection\n> should store the real compression method used rather than empty string,\n> to avoid confusing things when some stats are collected, then the\n> default is changed, then some more stats are collected.\n\nI was thinking about that already, but came to the conclusion that it a) \nwould blow up the size of these statistics by quite a bit and b) would \nbe quite tricky to display in a useful way.\n\nI mean, the use case of track_toast is pretty limited anyway; you'll \nprobably turn this feature on with a specific column in mind, of which \nyou'll probably know which compression method is used at the time.\n\nThanks for the feedback!\nv7 attached.\n-- \nGunnar \"Nick\" Bluth\n\nEimermacherweg 106\nD-48159 Münster\n\nMobil +49 172 8853339\nEmail: gunnar.bluth@pro-open.de\n__________________________________________________________________________\n\"Ceterum censeo SystemD esse delendam\" - Cato", "msg_date": "Tue, 4 Jan 2022 12:29:09 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <gunnar.bluth@pro-open.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_stat_toast v6" }, { "msg_contents": "Am 04.01.22 um 12:29 schrieb Gunnar \"Nick\" Bluth:\n> Am 03.01.22 um 22:23 schrieb Alvaro Herrera:\n>> Overall I think this is a good feature to have; assessing the need for\n>> compression is important for tuning, so +1 for the goal of the patch.\n> \n> Much appreciated!\n> \n> \n>> I didn't look into the patch carefully, but here are some minor\n>> comments:\n>>\n>> On 2022-Jan-03, Gunnar \"Nick\" Bluth wrote:\n>>\n>>> @@ -229,7 +230,9 @@ toast_tuple_try_compression(ToastTupleContext\n>>> *ttc, int attribute)\n>>>       Datum       *value = &ttc->ttc_values[attribute];\n>>>       Datum        new_value;\n>>>       ToastAttrInfo *attr = &ttc->ttc_attr[attribute];\n>>> +    instr_time    start_time;\n>>>   +    INSTR_TIME_SET_CURRENT(start_time);\n>>>       new_value = toast_compress_datum(*value, attr->tai_compression);\n>>>       
  if (DatumGetPointer(new_value) != NULL)\n>>\n>> Don't INSTR_TIME_SET_CURRENT unconditionally; in some systems it's an\n>> expensive syscall.  Find a way to only do it if the feature is enabled.\n> \n> Yeah, I was worried about that (and asking if it would be required)\n> already.\n> Adding the check was easier than I expected, though I'm absolutely\n> clueless if I did it right!\n> \n> #include \"pgstat.h\"\n> extern PGDLLIMPORT bool pgstat_track_toast;\n> \n\nAs it works and nobody objected, it seems to be the right way...\n\nv8 (applies cleanly to today's HEAD/master) attached.\n\nAny takers?\n\n-- \nGunnar \"Nick\" Bluth\n\nEimermacherweg 106\nD-48159 Münster\n\nMobil +49 172 8853339\nEmail: gunnar.bluth@pro-open.de\n__________________________________________________________________________\n\"Ceterum censeo SystemD esse delendam\" - Cato", "msg_date": "Tue, 8 Mar 2022 19:32:03 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <gunnar.bluth@pro-open.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_stat_toast v8" }, { "msg_contents": "Hi,\n\nOn 2022-03-08 19:32:03 +0100, Gunnar \"Nick\" Bluth wrote:\n> v8 (applies cleanly to today's HEAD/master) attached.\n\nThis doesn't apply anymore, likely due to my recent pgstat changes - which\nyou'd need to adapt to...\n\nhttp://cfbot.cputube.org/patch_37_3457.log\n\nMarked as waiting on author.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Mar 2022 18:17:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_toast v8" }, { "msg_contents": "Am 22.03.22 um 02:17 schrieb Andres Freund:\n> Hi,\n> \n> On 2022-03-08 19:32:03 +0100, Gunnar \"Nick\" Bluth wrote:\n>> v8 (applies cleanly to today's HEAD/master) attached.\n> \n> This doesn't apply anymore, likely due to my recent pgstat changes - which\n> you'd need to adapt to...\n\nNow, that's been quite an overhaul... 
kudos!\n\n\n> http://cfbot.cputube.org/patch_37_3457.log\n> \n> Marked as waiting on author.\n\nv9 attached.\n\nTBTH, I don't fully understand all the external/static stuff, but it\napplies to HEAD/master, compiles and passes all tests, so... ;-)\n\nBest regards,\n-- \nGunnar \"Nick\" Bluth\n\nEimermacherweg 106\nD-48159 Münster\n\nMobil +49 172 8853339\nEmail: gunnar.bluth@pro-open.de\n__________________________________________________________________________\n\"Ceterum censeo SystemD esse delendam\" - Cato", "msg_date": "Tue, 22 Mar 2022 12:23:04 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <gunnar.bluth@pro-open.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_stat_toast v9" }, { "msg_contents": "Am 22.03.22 um 12:23 schrieb Gunnar \"Nick\" Bluth:\n> Am 22.03.22 um 02:17 schrieb Andres Freund:\n>> Hi,\n>>\n>> On 2022-03-08 19:32:03 +0100, Gunnar \"Nick\" Bluth wrote:\n>>> v8 (applies cleanly to today's HEAD/master) attached.\n>>\n>> This doesn't apply anymore, likely due to my recent pgstat changes - which\n>> you'd need to adapt to...\n> \n> Now, that's been quite an overhaul... kudos!\n> \n> \n>> http://cfbot.cputube.org/patch_37_3457.log\n>>\n>> Marked as waiting on author.\n> \n> v9 attached.\n> \n> TBTH, I don't fully understand all the external/static stuff, but it\n> applies to HEAD/master, compiles and passes all tests, so... 
;-)\n\nAnd v10 catches up to master once again.\n\nBest,\n-- \nGunnar \"Nick\" Bluth\n\nEimermacherweg 106\nD-48159 Münster\n\nMobil +49 172 8853339\nEmail: gunnar.bluth@pro-open.de\n__________________________________________________________________________\n\"Ceterum censeo SystemD esse delendam\" - Cato", "msg_date": "Thu, 31 Mar 2022 15:14:24 +0200", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <gunnar.bluth@pro-open.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_stat_toast v9" }, { "msg_contents": "Am 31.03.22 um 15:14 schrieb Gunnar \"Nick\" Bluth:\n> Am 22.03.22 um 12:23 schrieb Gunnar \"Nick\" Bluth:\n>> Am 22.03.22 um 02:17 schrieb Andres Freund:\n>>> Hi,\n>>>\n>>> On 2022-03-08 19:32:03 +0100, Gunnar \"Nick\" Bluth wrote:\n>>>> v8 (applies cleanly to today's HEAD/master) attached.\n>>>\n>>> This doesn't apply anymore, likely due to my recent pgstat changes - which\n>>> you'd need to adapt to...\n>>\n>> Now, that's been quite an overhaul... kudos!\n>>\n>>\n>>> http://cfbot.cputube.org/patch_37_3457.log\n>>>\n>>> Marked as waiting on author.\n>>\n>> v9 attached.\n>>\n>> TBTH, I don't fully understand all the external/static stuff, but it\n>> applies to HEAD/master, compiles and passes all tests, so... 
;-)\n> \n> And v10 catches up to master once again.\n> \n> Best,\n\nThat was meant to say \"v10\", sorry!\n\n-- \nGunnar \"Nick\" Bluth\n\nEimermacherweg 106\nD-48159 Münster\n\nMobil +49 172 8853339\nEmail: gunnar.bluth@pro-open.de\n__________________________________________________________________________\n\"Ceterum censeo SystemD esse delendam\" - Cato", "msg_date": "Thu, 31 Mar 2022 15:16:00 +0200", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <gunnar.bluth@pro-open.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_stat_toast v10" }, { "msg_contents": "On Thu, Mar 31, 2022 at 9:16 AM Gunnar \"Nick\" Bluth\n<gunnar.bluth@pro-open.de> wrote:\n> That was meant to say \"v10\", sorry!\n\nHi,\n\n From my point of view, at least, it would be preferable if you'd stop\nchanging the subject line every time you post a new version.\n\nBased on the test results in\nhttp://postgr.es/m/42bfa680-7998-e7dc-b50e-480cdd986ffc@pro-open.de\nand the comments from Andres in\nhttps://www.postgresql.org/message-id/20211212234113.6rhmqxi5uzgipwx2%40alap3.anarazel.de\nmy judgement would be that, as things stand today, this patch has no\nchance of being accepted, due to overhead. Now, Andres is currently\nworking on an overhaul of the statistics collector and perhaps that\nwould reduce the overhead of something like this to an acceptable\nlevel. If it does, that would be great news; I just don't know whether\nthat's the case.\n\nAs far as the statistics themselves are concerned, I am somewhat\nskeptical about whether it's really worth adding code for this.\nAccording to the documentation, the purpose of the patch is to allow\nyou to assess choice of storage and compression method settings for a\ncolumn and is not intended to be enabled permanently. However, it\nseems to me that you could assess that pretty easily without this\npatch: just create a couple of different tables with different\nsettings, load up the same data via COPY into each one, and see what\nhappens. 
Now you might answer that with the patch you would get more\ndetailed and accurate statistics, and I think that's true, but it\ndoesn't really look like the additional level of detail would be\ncritical to have in order to make a proper assessment. You might also\nsay that creating multiple copies of the table and loading the data\nmultiple times would be expensive, and that's also true, but you don't\nreally need to load it all. A representative sample of 1GB or so would\nprobably suffice in most cases, and that doesn't seem likely to be a\nhuge load on the system.\n\nAlso, as we add more compression options, it's going to be hard to\nassess this sort of thing without trying stuff anyway. For example if\nyou can set the lz4 compression level, you're not going to know which\nlevel is actually going to work best without trying out a bunch of\nthem and seeing what happens. If we allow access to other sorts of\ncompression parameters like zstd's \"long\" option, similarly, if you\nreally care, you're going to have to try it.\n\nSo my feeling is that this feels like a lot of machinery and a lot of\nworst-case overhead to solve a problem that's really pretty easy to\nsolve without any new code at all, and therefore I'd be inclined to\nreject it. 
However, it's a well-known fact that sometimes my feelings\nabout things are pretty stupid, and this might be one of those times.\nIf so, I hope someone will enlighten me by telling me what I'm\nmissing.\n\nThanks,\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Apr 2022 12:17:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_toast v10" }, { "msg_contents": "Am 05.04.22 um 18:17 schrieb Robert Haas:\n> On Thu, Mar 31, 2022 at 9:16 AM Gunnar \"Nick\" Bluth\n> <gunnar.bluth@pro-open.de> wrote:\n>> That was meant to say \"v10\", sorry!\n> \n> Hi,\n\nHi Robert,\n\nand thx for looking at this.\n\n> From my point of view, at least, it would be preferable if you'd stop\n> changing the subject line every time you post a new version.\n\nTerribly sorry, I believed to do the right thing! I removed the \"suffix\"\nnow for good.\n\n> Based on the test results in\n> http://postgr.es/m/42bfa680-7998-e7dc-b50e-480cdd986ffc@pro-open.de\n> and the comments from Andres in\n> https://www.postgresql.org/message-id/20211212234113.6rhmqxi5uzgipwx2%40alap3.anarazel.de\n> my judgement would be that, as things stand today, this patch has no\n> chance of being accepted, due to overhead. Now, Andres is currently\n> working on an overhaul of the statistics collector and perhaps that\n> would reduce the overhead of something like this to an acceptable\n> level. If it does, that would be great news; I just don't know whether\n> that's the case.\n\nAFAICT, Andres' work is more about the structure (e.g.\n13619598f1080d7923454634a2570ca1bc0f2fec). Or I've missed something...?\n\nThe attached v11 incorporates the latest changes in the area, btw.\n\n\nAnyway, my (undisputed up to now!) understanding still is that only\nbackends _looking_ at these stats (so, e.g., accessing the pg_stat_toast\nview) actually read the data. 
So, the 10-15% more space used for pg_stat\nonly affect the stats collector and _some few_ backends.\n\nAnd those 10-15% were gathered with 10.000 tables containing *only*\nTOASTable attributes. So the actual percentage would probably go down\nquite a bit once you add some INTs or such.\nBack then, I was curious myself on the impact and just ran a few\nsynthetic tests quickly hacked together. I'll happily go ahead and run\nsome tests on real world schemas if that helps clarify matters!\n\n> As far as the statistics themselves are concerned, I am somewhat\n> skeptical about whether it's really worth adding code for this.\n> According to the documentation, the purpose of the patch is to allow\n> you to assess choice of storage and compression method settings for a\n> column and is not intended to be enabled permanently. However, it\n\nTBTH, the wording there is probably a bit over-cautious. I very much\nrespect Andres and thus his reservations, and I know how careful the\nproject is about regressions of any kind (see below on some elaborations\non the latter).\nI alleviated the <note> part a bit for v11.\n\n> seems to me that you could assess that pretty easily without this\n> patch: just create a couple of different tables with different\n> settings, load up the same data via COPY into each one, and see what\n> happens. Now you might answer that with the patch you would get more\n> detailed and accurate statistics, and I think that's true, but it\n> doesn't really look like the additional level of detail would be\n> critical to have in order to make a proper assessment. You might also\n> say that creating multiple copies of the table and loading the data\n> multiple times would be expensive, and that's also true, but you don't\n> really need to load it all. 
A representative sample of 1GB or so would\n> probably suffice in most cases, and that doesn't seem likely to be a\n> huge load on the system.\n\nAt the end of the day, one could argue like you did there for almost all\n(non-attribute) stats. \"Why track function execution times? Just set up\na benchmark and call the function 1 mio times and you'll know how long\nit takes on average!\". \"Why track IO timings? Run a benchmark on your\nsystem and ...\" etc. pp.\n\nI maintain a couple of DBs that house TBs of TOASTable data (mainly XML\ncontaining encrypted payloads). In just a couple of columns per cluster.\nI'm completely clueless if TOAST compression makes a difference there.\nOr externalization.\nAnd I'm not allowed to copy that data anywhere outside production\nwithout unrolling a metric ton of red tape.\nGuess why I started writing this patch ;-)\n*I* would certainly leave the option on, just to get an idea of what's\nhappening...\n\n> Also, as we add more compression options, it's going to be hard to\n> assess this sort of thing without trying stuff anyway. For example if\n> you can set the lz4 compression level, you're not going to know which\n> level is actually going to work best without trying out a bunch of\n> them and seeing what happens. If we allow access to other sorts of\n> compression parameters like zstd's \"long\" option, similarly, if you\n> really care, you're going to have to try it.\n\nFunny that you mention it. When writing the first version, I was\nthinking about the LZ4 patch authors and was wondering how they\ntested/benchmarked all of it and why they didn't implement something\nlike this patch for their tests ;-)\n\nYes, you're gonna try it. And you're gonna measure it. 
Somehow.\nExternally, as things are now.\n\nWith pg_stat_toast, you'd get the byte-by-byte and - maybe even more\nimportant - ms-by-ms comparison of the different compression and\nexternalization strategies straight from the core of the DB.\nI'd fancy that!\n\nAnd if you get these stats by just flicking a switch (or leaving it on\npermanently...), you might start looking at the pg_stat_toast view from\ntime to time, maybe realizing that your DB server spent hours of CPU\ntime trying to compress data that's compressed already. Or of which you\n_know_ that it's only gonna be around for a couple of seconds...\n\nMind you, a *lot* of people out there aren't even aware that TOAST even\nexists. Granted, most probably just don't care... ;-)\n\n\nPlus: this would (potentially, one day) give us information we could\neventually incorporate into EXPLAIN [ANALYZE]. Like, \"estimated time for\n(un)compressing TOAST values\" or so.\n\n> So my feeling is that this feels like a lot of machinery and a lot of\n> worst-case overhead to solve a problem that's really pretty easy to\n> solve without any new code at all, and therefore I'd be inclined to\n> reject it. However, it's a well-known fact that sometimes my feelings\n> about things are pretty stupid, and this might be one of those times.\n> If so, I hope someone will enlighten me by telling me what I'm\n> missing.\n\nMost DBAs I met will *happily* donate a few CPU cycles (and MBs) to\ngather as much first hand information about their *live* systems.\n\nWhy is pg_stat_statements so popular? Even if it costs 5-10% CPU\ncycles...? If I encounter a tipped-over plan and have a load1 of 1200 on\nmy production server, running pgbadger on 80GB of (compressed) full\nstatement logs will just not give me the information I need _now_\n(actually, an hour ago). So I happily deploy pg_stat_statements\neverywhere, *hoping* that I'll never really need it...\n\nWhy is \"replica\" now the default WAL level? 
Because essentially\neverybody changed it anyway, _just in case_. People looking for the last\ncouple of % disk space will tune it down to \"minimal\", for everybody\nelse, the gain in *options* vastly outweighs the additional disk usage.\n\nWhy is everybody asking for live execution plans? Or a progress\nindication? The effort to get these is ridiculous from what I know,\nstill I'd fancy them a lot!\n\nOne of my clients is currently spending a lot of time (and thus $$$) to\nget some profiling software (forgot the name) for their DB2 to work (and\nnot push AIX into OOM situations, actually ;). And compared to\nPostgreSQL, I'm pretty sure you get a lot more insights from a stock DB2\nalready. As that's what customers ask for...\n\nIn essence: if *I* read in the docs \"this will give you useful\ninformation\" (and saves you effort for testing it in a seperate\nenvironment) \"but may use up some RAM and disk space for pg_stats\", I\nflick that switch on and probably leave it there.\n\nAnd in real world applications, you'd almost certainly never note a\ndifference (we're discussing ~ 50-60 bytes per attribute, afterall).\n\nI reckon most DBAs (and developers) would give this a spin and leave it\non, out of curiosity first and out of sheer convenience later.\nLike, if I run a DB relying heavily on stored procedures, I'll certainly\nenable \"track_functions\".\nNow show me the DB without any TOASTable attributes! ;-)\n\nTBTH, I imagine this to be a default \"on\" GUC parameter *eventually*,\nwhich some people with *very* special needs (and braindead schemas\ncausing the \"worst-case overhead\" you mention) turn \"off\". 
But alas!\nthat's not how we add features, is it?\n\nAlso, I wouldn't call ~ 583 LOC plus docs & tests \"a lot of machinery\" ;-).\n\nAgain, thanks a lot for peeking at this and\nbest regards,\n-- \nGunnar \"Nick\" Bluth\n\nEimermacherweg 106\nD-48159 Münster\n\nMobil +49 172 8853339\nEmail: gunnar.bluth@pro-open.de\n__________________________________________________________________________\n\"Ceterum censeo SystemD esse delendam\" - Cato", "msg_date": "Wed, 6 Apr 2022 00:08:13 +0200", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <gunnar.bluth@pro-open.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_stat_toast" }, { "msg_contents": "Hi,\n\nOn 2022-04-06 00:08:13 +0200, Gunnar \"Nick\" Bluth wrote:\n> AFAICT, Andres' work is more about the structure (e.g.\n> 13619598f1080d7923454634a2570ca1bc0f2fec). Or I've missed something...?\n\nThat was just the prep work... I'm about to send slightly further polished\nversion, but here's the patchset from yesterday:\nhttps://www.postgresql.org/message-id/20220405030506.lfdhbu5zf4tzdpux%40alap3.anarazel.de\n\n> The attached v11 incorporates the latest changes in the area, btw.\n> \n> \n> Anyway, my (undisputed up to now!) understanding still is that only\n> backends _looking_ at these stats (so, e.g., accessing the pg_stat_toast\n> view) actually read the data. So, the 10-15% more space used for pg_stat\n> only affect the stats collector and _some few_ backends.\n\nIt's not so simple. That stats collector constantly writes these stats out to\ndisk. And disk bandwidth / space is of course a shared resource.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 5 Apr 2022 19:34:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_toast" }, { "msg_contents": "On Tue, Apr 5, 2022 at 10:34 PM Andres Freund <andres@anarazel.de> wrote:\n> > Anyway, my (undisputed up to now!) 
understanding still is that only\n> > backends _looking_ at these stats (so, e.g., accessing the pg_stat_toast\n> > view) actually read the data. So, the 10-15% more space used for pg_stat\n> > only affect the stats collector and _some few_ backends.\n>\n> It's not so simple. That stats collector constantly writes these stats out to\n> disk. And disk bandwidth / space is of course a shared resource.\n\nYeah, and just to make it clear, this really becomes an issue if you\nhave hundreds of thousands or even millions of tables. It's a lot of\nextra data to be writing, and in some cases we're rewriting it all,\nlike, once per second.\n\nNow if we're only incurring that overhead when this feature is\nenabled, then in fairness that problem is a lot less of an issue,\nespecially if this is also disabled by default. People who want the\ndata can get it and pay the cost, and others aren't much impacted.\nHowever, experience has taught me that a lot of skepticism is\nwarranted when it comes to claims about how cheap extensions to the\nstatistics system will be.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 6 Apr 2022 11:22:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_toast" }, { "msg_contents": "On 2022-Apr-06, Robert Haas wrote:\n\n> Now if we're only incurring that overhead when this feature is\n> enabled, then in fairness that problem is a lot less of an issue,\n> especially if this is also disabled by default. 
People who want the\n> data can get it and pay the cost, and others aren't much impacted.\n> However, experience has taught me that a lot of skepticism is\n> warranted when it comes to claims about how cheap extensions to the\n> statistics system will be.\n\nMaybe this feature should provide a way to be enabled for tables\nindividually, so you pay the overhead only where you need it and don't\nswamp the system with stats for uninteresting tables.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Cada quien es cada cual y baja las escaleras como quiere\" (JMSerrat)\n\n\n", "msg_date": "Wed, 6 Apr 2022 17:49:34 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_toast" }, { "msg_contents": "On Tue, Apr 5, 2022 at 6:08 PM Gunnar \"Nick\" Bluth\n<gunnar.bluth@pro-open.de> wrote:\n> At the end of the day, one could argue like you did there for almost all\n> (non-attribute) stats. \"Why track function execution times? Just set up\n> a benchmark and call the function 1 mio times and you'll know how long\n> it takes on average!\". \"Why track IO timings? Run a benchmark on your\n> system and ...\" etc. pp.\n>\n> I maintain a couple of DBs that house TBs of TOASTable data (mainly XML\n> containing encrypted payloads). In just a couple of columns per cluster.\n> I'm completely clueless if TOAST compression makes a difference there.\n> Or externalization.\n> And I'm not allowed to copy that data anywhere outside production\n> without unrolling a metric ton of red tape.\n> Guess why I started writing this patch ;-)\n> *I* would certainly leave the option on, just to get an idea of what's\n> happening...\n\nI feel like if you want to know whether externalization made a\ndifference, you can look at the size of the TOAST table. And by\nselecting directly from that table, you can even see how many chunks\nit contains, and how many are full-sized chunks vs. 
partial chunks,\nand stuff like that. And for compression, how about looking at\npg_column_size(col1) vs. pg_column_size(col1||'') or something like\nthat? You might get a 1-byte varlena header on the former and a 4-byte\nvarlena header on the latter even if there's no compression, but any\ngains beyond 3 bytes have to be due to compression.\n\n> Most DBAs I met will *happily* donate a few CPU cycles (and MBs) to\n> gather as much first hand information about their *live* systems.\n>\n> Why is pg_stat_statements so popular? Even if it costs 5-10% CPU\n> cycles...? If I encounter a tipped-over plan and have a load1 of 1200 on\n> my production server, running pgbadger on 80GB of (compressed) full\n> statement logs will just not give me the information I need _now_\n> (actually, an hour ago). So I happily deploy pg_stat_statements\n> everywhere, *hoping* that I'll never really need it...\n>\n> [ additional arguments ]\n\nI'm not trying to argue that instrumentation in the database is *in\ngeneral* useless. That would be kinda ridiculous, especially since\nI've spent time working on it myself.\n\nBut all cases are not the same. If you don't use something like\npg_stat_statements or auto_explain or log_min_duration_statement, you\ndon't have any good way of knowing which of your queries are slow and\nhow slow they are, and you really need some kind of instrumentation to\nhelp you figure that out. On the other hand, you CAN find out how\neffective compression is, at least in terms of space, without\ninstrumentation, because it leaves state on disk that you can examine\nwhenever you like. The stuff that the patch tells you about how much\n*time* was consumed is data you can't get after-the-fact, so maybe\nthere's enough value there to justify adding code to measure it. I'm\nnot entirely convinced, though, because I think that for most people\nin most situations doing trial loads and timing them will give\nsufficiently good information that they won't need anything else. 
I'm\nnot here to say that you must be wrong if you don't agree with me; I'm\njust saying what I think based on my own experience.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 6 Apr 2022 11:49:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_toast" }, { "msg_contents": "On Wed, Apr 6, 2022 at 11:49 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Apr-06, Robert Haas wrote:\n> > Now if we're only incurring that overhead when this feature is\n> > enabled, then in fairness that problem is a lot less of an issue,\n> > especially if this is also disabled by default. People who want the\n> > data can get it and pay the cost, and others aren't much impacted.\n> > However, experience has taught me that a lot of skepticism is\n> > warranted when it comes to claims about how cheap extensions to the\n> > statistics system will be.\n>\n> Maybe this feature should provide a way to be enabled for tables\n> individually, so you pay the overhead only where you need it and don't\n> swamp the system with stats for uninteresting tables.\n\nMaybe. Or maybe once Andres finishes fixing the stats collector the\ncost goes down so much that it's just not a big issue any more. I'm\nnot sure. For me the first question is really around how useful this\ndata really is. I think we can take it as given that the data would be\nuseful to Gunnar, but I can't think of a situation when it would have\nbeen useful to me, so I'm curious what other people think (and why).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 6 Apr 2022 11:52:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_toast" }, { "msg_contents": "Am 06.04.22 um 17:22 schrieb Robert Haas:\n> On Tue, Apr 5, 2022 at 10:34 PM Andres Freund <andres@anarazel.de> wrote:\n>>> Anyway, my (undisputed up to now!) 
understanding still is that only\n>>> backends _looking_ at these stats (so, e.g., accessing the pg_stat_toast\n>>> view) actually read the data. So, the 10-15% more space used for pg_stat\n>>> only affect the stats collector and _some few_ backends.\n>>\n>> It's not so simple. That stats collector constantly writes these stats out to\n>> disk. And disk bandwidth / space is of course a shared resource.\n> \n> Yeah, and just to make it clear, this really becomes an issue if you\n> have hundreds of thousands or even millions of tables. It's a lot of\n> extra data to be writing, and in some cases we're rewriting it all,\n> like, once per second.\n\nFair enough. At that point, a lot of things become unexpectedly painful.\nHow many % of the installed base may that be though?\n\nI'm far from done reading the patch and mail thread Andres mentioned,\nbut I think the general idea is to move the stats to shared memory, so\nthat reading (and thus, persisting) pg_stats is required far less often,\nright?\n\n> Now if we're only incurring that overhead when this feature is\n> enabled, then in fairness that problem is a lot less of an issue,\n> especially if this is also disabled by default. People who want the\n> data can get it and pay the cost, and others aren't much impacted.\n\nThat's the idea, yes. I reckon folks with millions of tables will scan\nthrough the docs (and postgresql.conf) very thoroughly anyway. Hence the\nnote there.\n\n> However, experience has taught me that a lot of skepticism is\n> warranted when it comes to claims about how cheap extensions to the\n> statistics system will be.\n\nAgain, fair enough!\nMaybe we first need statistics about statistics collection and handling? 
;-)\n\nBest,\n-- \nGunnar \"Nick\" Bluth\n\nEimermacherweg 106\nD-48159 Münster\n\nMobil +49 172 8853339\nEmail: gunnar.bluth@pro-open.de\n__________________________________________________________________________\n\"Ceterum censeo SystemD esse delendam\" - Cato\n\n\n", "msg_date": "Wed, 6 Apr 2022 18:01:26 +0200", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <gunnar.bluth@pro-open.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_stat_toast" }, { "msg_contents": "Am 06.04.22 um 17:49 schrieb Robert Haas:\n\n> I feel like if you want to know whether externalization made a\n> difference, you can look at the size of the TOAST table. And by\n> selecting directly from that table, you can even see how many chunks\n> it contains, and how many are full-sized chunks vs. partial chunks,\n> and stuff like that. And for compression, how about looking at\n> pg_column_size(col1) vs. pg_column_size(col1||'') or something like\n> that? You might get a 1-byte varlena header on the former and a 4-byte\n> varlena header on the latter even if there's no compression, but any\n> gains beyond 3 bytes have to be due to compression.\n\nI'll probably give that a shot!\n\nIt does feel a bit like noting your mileage on fuel receipts though, as\nI've done until I got my first decent car; works and will work perfectly\nwell up to the day, but certainly is a bit out-of-time (and requires\nsome basic math skills ;-).\n\nBest,\n-- \nGunnar \"Nick\" Bluth\n\nEimermacherweg 106\nD-48159 Münster\n\nMobil +49 172 8853339\nEmail: gunnar.bluth@pro-open.de\n__________________________________________________________________________\n\"Ceterum censeo SystemD esse delendam\" - Cato\n\n\n", "msg_date": "Wed, 6 Apr 2022 18:20:29 +0200", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <gunnar.bluth@pro-open.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_stat_toast" }, { "msg_contents": "On Wed, Apr 6, 2022 at 12:01 PM Gunnar \"Nick\" Bluth\n<gunnar.bluth@pro-open.de> wrote:\n> Fair enough. 
At that point, a lot of things become unexpectedly painful.\n> How many % of the installed base may that be though?\n\nI don't have statistics on that, but it's large enough that the\nexpense associated with the statistics collector is a reasonably\nwell-known pain point, and for some users, a really severe one.\n\nAlso, if we went out and spun up a billion new PostgreSQL instances\nthat were completely idle and had no data in them, that would decrease\nthe percentage of the installed base with high table counts, but it\nwouldn't be an argument for or against this patch. The people who are\nusing PostgreSQL heavily are both more likely to have a lot of tables\nand also more likely to be interested in more obscure statistics. The\nquestion isn't - how likely is a random PostgreSQL installation to\nhave a lot of tables? - but rather - how likely is a PostgreSQL\ninstallation that cares about this feature to have a lot of tables? I\ndon't know either of those percentages but surely the second must be\nsignificantly higher than the first.\n\n> I'm far from done reading the patch and mail thread Andres mentioned,\n> but I think the general idea is to move the stats to shared memory, so\n> that reading (and thus, persisting) pg_stats is required far less often,\n> right?\n\nRight. I give Andres a lot of props for dealing with this mess,\nactually. Infrastructure work like this is a ton of work and hard to\nget right and you can always ask yourself whether the gains are really\nworth it, but your patch is not anywhere close to the first one where\nthe response has been \"but that would be too expensive!\". So we have\nto consider not only the direct benefit of that work in relieving the\npain of people with large database clusters, but also the indirect\nbenefits of maybe unblocking some other improvements that would be\nbeneficial. 
I'm fairly sure it's not going to make things so cheap\nthat we can afford to add all the statistics anybody wants, but it's\nso painful that even modest relief would be more than welcome.\n\n> > However, experience has taught me that a lot of skepticism is\n> > warranted when it comes to claims about how cheap extensions to the\n> > statistics system will be.\n>\n> Again, fair enough!\n> Maybe we first need statistics about statistics collection and handling? ;-)\n\nHeh.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 6 Apr 2022 12:24:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_toast" }, { "msg_contents": "Hi,\n\nOn 2022-04-06 12:24:20 -0400, Robert Haas wrote:\n> On Wed, Apr 6, 2022 at 12:01 PM Gunnar \"Nick\" Bluth\n> <gunnar.bluth@pro-open.de> wrote:\n> > Fair enough. At that point, a lot of things become unexpectedly painful.\n> > How many % of the installed base may that be though?\n> \n> I don't have statistics on that, but it's large enough that the\n> expense associated with the statistics collector is a reasonably\n> well-known pain point, and for some users, a really severe one.\n\nYea. I've seen well over 100MB/s of write IO solely due to stats files writes\non production systems, years ago.\n\n\n> I'm fairly sure it's not going to make things so cheap that we can afford to\n> add all the statistics anybody wants, but it's so painful that even modest\n> relief would be more than welcome.\n\nIt definitely doesn't make stats free. 
But I'm hopeful that avoiding the\nregular writing out / reading back in, and the ability to only optionally store\nsome stats (by varying allocation size or just having different kinds of\nstats), will reduce the cost sufficiently that we can start keeping more\nstats.\n\nWhich is not to say that these stats are the right ones (nor that they're the\nwrong ones).\n\n\nI think if I were to tackle providing more information about toasting, I'd\nstart not by adding a new stats view, but by adding a function to pgstattuple\nthat scans the relation and collects stats for each toasted column. An SRF\nreturning one row for each toastable column. With information like\n\n- column name\n- #inline datums\n- #compressed inline datums\n- sum(uncompressed inline datum size)\n- sum(compressed inline datum size)\n- #external datums\n- #compressed external datums\n- sum(uncompressed external datum size)\n- sum(compressed external datum size)\n\nIIRC this shouldn't require visiting the toast table itself.\n\n\nPerhaps also an SRF that returns information about each compression method\nseparately (i.e. collect above information, but split by compression method)?\nPerhaps even with the ability to measure how big the gains of recompressing\ninto another method would be?\n\n\n> > > However, experience has taught me that a lot of skepticism is\n> > > warranted when it comes to claims about how cheap extensions to the\n> > > statistics system will be.\n> >\n> > Again, fair enough!\n> > Maybe we first need statistics about statistics collection and handling? 
;-)\n> \n> Heh.\n\nI've wondered about adding pg_stat_stats the other day, actually :)\nhttps://postgr.es/m/20220404193435.hf3vybaajlpfmbmt%40alap3.anarazel.de\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 6 Apr 2022 09:55:49 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_toast" }, { "msg_contents": "Am 06.04.22 um 18:55 schrieb Andres Freund:\n> Hi,\n> \n> On 2022-04-06 12:24:20 -0400, Robert Haas wrote:\n>> On Wed, Apr 6, 2022 at 12:01 PM Gunnar \"Nick\" Bluth\n>> <gunnar.bluth@pro-open.de> wrote:\n>>> Fair enough. At that point, a lot of things become unexpectedly painful.\n>>> How many % of the installed base may that be though?\n>>\n>> I don't have statistics on that, but it's large enough that the\n>> expense associated with the statistics collector is a reasonably\n>> well-known pain point, and for some users, a really severe one.\n> \n> Yea. I've seen well over 100MB/s of write IO solely due to stats files writes\n> on production systems, years ago.\n\nWow. Yeah, I tend to forget there's systems like ads' out there ;-)\n\n\n>> I'm fairly sure it's not going to make things so cheap that we can afford to\n>> add all the statistics anybody wants, but it's so painful that even modest\n>> relief would be more than welcome.\n> \n> It definitely doesn't make stats free. 
But I'm hopeful that avoiding the\n> regular writing out / reading back in, and the ability to only optionally store\n> some stats (by varying allocation size or just having different kinds of\n> stats), will reduce the cost sufficiently that we can start keeping more\n> stats.\n\nKnock on wood!\n\n\n> Which is not to say that these stats are the right ones (nor that they're the\n> wrong ones).\n\n;-)\n\n\n> I think if I were to tackle providing more information about toasting, I'd\n> start not by adding a new stats view, but by adding a function to pgstattuple\n> that scans the relation and collects stats for each toasted column. An SRF\n> returning one row for each toastable column. With information like\n> \n> - column name\n> - #inline datums\n> - #compressed inline datums\n> - sum(uncompressed inline datum size)\n> - sum(compressed inline datum size)\n> - #external datums\n> - #compressed external datums\n> - sum(uncompressed external datum size)\n> - sum(compressed external datum size)\n> \n> IIRC this shouldn't require visiting the toast table itself.\n\nBut it would still require a seqscan and quite some cycles. However,\nsure, something like that is an option.\n\n> Perhaps also an SRF that returns information about each compression method\n> separately (i.e. collect above information, but split by compression method)?\n> Perhaps even with the ability to measure how big the gains of recompressing\n> into another method would be?\n\nEven more of the above, but yeah, sounds nifty.\n\n>>>> However, experience has taught me that a lot of skepticism is\n>>>> warranted when it comes to claims about how cheap extensions to the\n>>>> statistics system will be.\n>>>\n>>> Again, fair enough!\n>>> Maybe we first need statistics about statistics collection and handling? 
;-)\n>>\n>> Heh.\n> \n> I've wondered about adding pg_stat_stats the other day, actually :)\n> https://postgr.es/m/20220404193435.hf3vybaajlpfmbmt%40alap3.anarazel.de\n\nOMG LOL!\n\n-- \nGunnar \"Nick\" Bluth\n\nEimermacherweg 106\nD-48159 Münster\n\nMobil +49 172 8853339\nEmail: gunnar.bluth@pro-open.de\n__________________________________________________________________________\n\"Ceterum censeo SystemD esse delendam\" - Cato\n\n\n", "msg_date": "Thu, 7 Apr 2022 09:20:39 +0200", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <gunnar.bluth@pro-open.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_stat_toast" }, { "msg_contents": "Am 06.04.22 um 17:49 schrieb Alvaro Herrera:\n> On 2022-Apr-06, Robert Haas wrote:\n> \n>> Now if we're only incurring that overhead when this feature is\n>> enabled, then in fairness that problem is a lot less of an issue,\n>> especially if this is also disabled by default. People who want the\n>> data can get it and pay the cost, and others aren't much impacted.\n>> However, experience has taught me that a lot of skepticism is\n>> warranted when it comes to claims about how cheap extensions to the\n>> statistics system will be.\n> \n> Maybe this feature should provide a way to be enabled for tables\n> individually, so you pay the overhead only where you need it and don't\n> swamp the system with stats for uninteresting tables.\n> \nThat would obviously be very nice (and Georgios pushed heavily in that\ndirection ;-).\n\nHowever, I intentionally bound those stats to the database level (see my\nvery first mail).\n\nThe changes to get them bound to attributes (or tables) would have\nrequired mangling with quite a lot of very elemental stuff, (e.g.\nattribute stats only get refreshed by ANALYZE and their structure would\nhave to be changed significantly, bloating them even if the feature is\ninactive).\n\nIt also would have made the stats updates synchronous (at TX end), would\nhave been \"blind\" for all TOAST efforts done by rolled 
back TXs etc.\n\nWhat you can do is of course (just like track_functions):\n ALTER DATABASE under_surveillance SET track_toast = [on|off];\n\nBest,\n-- \nGunnar \"Nick\" Bluth\n\nEimermacherweg 106\nD-48159 Münster\n\nMobil +49 172 8853339\nEmail: gunnar.bluth@pro-open.de\n__________________________________________________________________________\n\"Ceterum censeo SystemD esse delendam\" - Cato\n\n\n", "msg_date": "Thu, 7 Apr 2022 09:43:22 +0200", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <gunnar.bluth@pro-open.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_stat_toast" } ]
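The per-column TOAST accounting Andres sketches in the thread above (inline vs. external datums, compressed vs. not, raw vs. stored byte sums) can be illustrated with a small, self-contained simulation. This is a hypothetical Python sketch, not the pgstattuple C implementation; the datum tuples and field names here are invented for illustration:

```python
from collections import defaultdict

def toast_column_stats(datums):
    """Aggregate per-column TOAST statistics.

    datums is an iterable of hypothetical records:
    (column_name, is_external, is_compressed, raw_size, stored_size).
    A real SRF would derive these from each varlena header while
    scanning the heap, without visiting the toast table itself.
    """
    stats = defaultdict(lambda: {
        "inline": 0, "inline_compressed": 0,
        "inline_raw_bytes": 0, "inline_stored_bytes": 0,
        "external": 0, "external_compressed": 0,
        "external_raw_bytes": 0, "external_stored_bytes": 0,
    })
    for col, external, compressed, raw, stored in datums:
        s = stats[col]
        kind = "external" if external else "inline"
        s[kind] += 1                         # datum count
        if compressed:
            s[kind + "_compressed"] += 1     # compressed datum count
        s[kind + "_raw_bytes"] += raw        # uncompressed size sum
        s[kind + "_stored_bytes"] += stored  # on-disk size sum
    return {col: dict(s) for col, s in stats.items()}
```

Emitting one dictionary entry per toastable column matches the one-row-per-column SRF shape described in the thread; splitting the same aggregation by compression method would give the per-method variant Andres also suggests.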
[ { "msg_contents": "Robert Haas has written on the subject of useless vacuuming, here:\n\nhttp://rhaas.blogspot.com/2020/02/useless-vacuuming.html\n\nI'm sure at least a few of us have thought about the problem at some\npoint. I would like to discuss how we can actually avoid useless\nvacuuming, and what our goals should be.\n\nI am currently working on decoupling advancing relfrozenxid from tuple\nfreezing [1]. That is, I'm teaching VACUUM to keep track of\ninformation that it uses to generate an \"optimal value\" for the\ntable's final relfrozenxid: the most recent XID value that might still\nbe in the table. This patch is based on the observation that we don't\nactually have to use the FreezeLimit cutoff for our new\npg_class.relfrozenxid. We need only obey the basic relfrozenxid\ninvariant, which is that the final value must be <= any extant XID in\nthe table. Using FreezeLimit is needlessly conservative.\n\nMy draft patch to implement the optimization (which builds on the\npatches already posted to [1]) will reliably set pg_class.relfrozenxid\nto the same VACUUM's precise original OldestXmin once certain\nconditions are met -- reasonably common conditions. For example, the\nsame precise OldestXmin XID is used for relfrozenxid in the event of a\nmanual VACUUM (without FREEZE) on a table that was just bulk-loaded,\nassuming the system is otherwise idle. Setting relfrozenxid to the\nprecise lowest safe value happens on a best-effort basis, without\nneedlessly tying that to things like when or how we freeze tuples.\n\nIt now occurs to me to push this patch in another direction, on top of\nall that: the OldestXmin behavior hints at a precise, robust way of\ndefining \"useless vacuuming\". We can condition skipping a VACUUM (i.e.\nwhether a VACUUM is considered \"definitely won't be useful if allowed\nto execute\") on whether or not our preexisting pg_class.relfrozenxid\nprecisely equals our newly-acquired OldestXmin for an about-to-begin\nVACUUM operation. 
(We'd also want to add an \"unchangeable\npg_class.relminmxid\" test, I think.)\n\nThis definition does seem to be close to ideal: We're virtually\nassured that there will be no more useful work for us, in a way that\nis grounded in theory but still quite practical. But it's not a slam\ndunk. A person could still argue that we shouldn't cancel the VACUUM\nbefore it has begun, even when all these conditions have been met.\nThis would not be a particularly strong argument, mind you, but it's\nstill worth taking seriously. We need an exact problem statement that\njustifies whatever definition of \"useless VACUUM\" we settle on.\n\nHere are arguments *against* the skipping behavior I sketched out:\n\n* An aborted transaction might need to be cleaned up, which should be\nable to go ahead despite the unchanged OldestXmin. (I think that this\nis the argument with the most merit, by quite a bit.)\n\n* In general index AMs may want to do deferred cleanup, say to place\npreviously deleted pages in the FSM. Although in practice the criteria\nfor recycling safety used by nbtree and GiST will make that\nimpossible, there is no fundamental reason why they need to work that\nway (XIDs are used, but only because they provide a conveniently\navailable notion of \"logical time\" that is sufficient to implement\nwhat Lanin & Shasha call \"the drain technique\"). Plus GIN really could\ndo real work in amvacuumcleanup, for the pending list. There are bound\nto be a handful of marginal things like this.\n\n* Who are we to intervene like this, anyway? (Makes much more sense if\nwe don't limit ourselves to autovacuum worker operations.)\n\nOffhand, I suspect that we should only consider skipping \"useless\"\nanti-wraparound autovacuums (not other kinds of autovacuums, not\nmanual VACUUMs). The arguments against skipping are weakest for the\nanti-wraparound case. 
And the arguments in favor are particularly\nstrong: we should specifically avoid starting a useless (and possibly\ntime-consuming) anti-wraparound autovacuum, because that could easily\nblock an actually-useful autovacuum launched some time later. We\nshould aim to be in a position to launch an anti-wraparound autovacuum\nthat can actually advance relfrozenxid as soon as that becomes\npossible (e.g. when the DBA drops an old replication slot that was\nholding back each VACUUM's OldestXmin). And so \"skipping\" makes us\nmuch more responsive, which seems like it might matter a lot in\npractice. It minimizes the risk of wraparound failure.\n\nThere is also a strong argument for logging our failure to clean up\nanything in any autovacuum -- we don't do nearly enough alerting when\nstuff like this happens (possibly because \"useless\" is such a squishy\nconcept right now?). Just logging something still requires defining\n\"useless VACUUM operation\" in a way that is both reliable and\nproportionate. So just logging something necessitates solving that\nhard problem.\n\n[1] https://commitfest.postgresql.org/36/3433/\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 12 Dec 2021 17:47:18 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Defining (and possibly skipping) useless VACUUM operations" }, { "msg_contents": "On Sun, Dec 12, 2021 at 8:47 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I am currently working on decoupling advancing relfrozenxid from tuple\n> freezing [1]. That is, I'm teaching VACUUM to keep track of\n> information that it uses to generate an \"optimal value\" for the\n> table's final relfrozenxid: the most recent XID value that might still\n> be in the table. This patch is based on the observation that we don't\n> actually have to use the FreezeLimit cutoff for our new\n> pg_class.relfrozenxid. 
We need only obey the basic relfrozenxid\n> invariant, which is that the final value must be <= any extant XID in\n> the table. Using FreezeLimit is needlessly conservative.\n\nRight.\n\n> It now occurs to me to push this patch in another direction, on top of\n> all that: the OldestXmin behavior hints at a precise, robust way of\n> defining \"useless vacuuming\". We can condition skipping a VACUUM (i.e.\n> whether a VACUUM is considered \"definitely won't be useful if allowed\n> to execute\") on whether or not our preexisting pg_class.relfrozenxid\n> precisely equals our newly-acquired OldestXmin for an about-to-begin\n> VACUUM operation. (We'd also want to add an \"unchangeable\n> pg_class.relminmxid\" test, I think.)\n\nI think this is a reasonable line of thinking, but I think it's a\nlittle imprecise. In general, we could be vacuuming a relation to\nadvance relfrozenxid, but we could also be vacuuming a relation to\nadvance relminmxid, or we could be vacuuming a relation to fight\nbloat, or set pages all-visible. It is possible that there's no hope\nof advancing relfrozenxid but that we can still accomplish one of the\nother goals. In that case, the vacuuming is not useless. I think the\nplace to put logic around this would be in the triggering logic for\nautovacuum. If we're going to force a relation to be vacuumed because\nof (M)XID wraparound danger, we could first check whether there seems\nto be any hope of advancing relfrozenxid(minmxid). If not, we discount\nthat as a trigger for vacuum, but may still decide to vacuum if some\nother trigger warrants it. In most cases, if there's no hope of\nadvancing relfrozenxid, there won't be any bloat to remove either, but\naborted transactions are a counterexample. And the XID and MXID\nhorizons can advance at completely different rates.\n\nOne reason I haven't pursued this kind of optimization is that it\ndoesn't really feel like it's fixing the whole problem. 
It would be a\nlittle bit sad if we did a perfect job preventing useless vacuuming\nbut still allowed almost-useless vacuuming. Suppose we have a 1TB\nrelation and we trigger autovacuum. It cleans up a few things but\nrelfrozenxid is still old. On the next pass, we see that the\nsystem-wide xmin has not advanced, so we don't trigger autovacuum\nagain. Then on the pass after that we see that the system-wide xmin\nhas advanced by 1. Shall we trigger an autovacuum of the whole\nrelation now, to be able to do relfrozenxid++? Seems dubious.\n\nPart of the problem here is that, for both\nvacuuming-for-bloat and\nvacuuming-for-relfrozenxid-advancement, we would really like to know\nthe distribution of old XIDs in the table. If we knew that a lot of\nthe inserts, updates, and deletes that are causing us to vacuum for\nbloat containment were in a certain relatively narrow range, then we'd\nprobably want to not autovacuum for either purpose until the\nsystem-wide xmin has crossed through at least a good chunk of that\nrange. And if it fully crossed over that range then an immediate vacuum\nlooks extremely appealing: we'll both remove a bunch of dead tuples\nand reclaim the associated line pointers, and at the same time we'll\nbe able to advance relfrozenxid. Nice! But we have no such\ninformation.\n\nSo I'm not certain of the way forward here. Just because we can't\nprevent almost-useless vacuuming is not a sufficient reason to\ncontinue allowing entirely-useless vacuuming that we can prevent. And\nit seems like we need a bunch of new bookkeeping to do any better than\nthat, which seems like a lot of work. 
So maybe it's the most practical\npath forward for the time being, but it feels like more of a\nspecial-purpose kludge than a truly high-quality solution.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Dec 2021 09:05:38 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Defining (and possibly skipping) useless VACUUM operations" }, { "msg_contents": "On Tue, Dec 14, 2021 at 6:05 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think this is a reasonable line of thinking, but I think it's a\n> little imprecise. In general, we could be vacuuming a relation to\n> advance relfrozenxid, but we could also be vacuuming a relation to\n> advance relminmxid, or we could be vacuuming a relation to fight\n> bloat, or set pages all-visible. It is possible that there's no hope\n> of advancing relfrozenxid but that we can still accomplish one of the\n> other goals. In that case, the vacuuming is not useless. I think the\n> place to put logic around this would be in the triggering logic for\n> autovacuum. If we're going to force a relation to be vacuumed because\n> of (M)XID wraparound danger, we could first check whether there seems\n> to be any hope of advancing relfrozenxid(minmxid). If not, we discount\n> that as a trigger for vacuum, but may still decide to vacuum if some\n> other trigger warrants it. In most cases, if there's no hope of\n> advancing relfrozenxid, there won't be any bloat to remove either, but\n> aborted transactions are a counterexample. 
And the XID and MXID\n> horizons can advance at completely different rates.\n\nI think that you'd agree that the arguments in favor of skipping are\nstrongest for an aggressive anti-wraparound autovacuum (as opposed to\nany other kind of aggressive VACUUM, including aggressive autovacuum).\nAside from the big benefit I pointed out already (avoiding blocking\nuseful anti-wraparound vacuums that starts a little later by not\nstarting a conflicting useless anti-wraparound vacuum now), there is\nalso more certainty about downsides. We can know the following things\nfor sure:\n\n* We only launch an (aggressive) anti-wraparound autovacuum because we\nneed to advance relfrozenxid. In other words, if we didn't need to\nadvance relfrozenxid then (for better or worse) we definitely wouldn't\nbe launching anything.\n\n* Our would-be OldestXmin exactly matches the preexisting\npg_class.relfrozenxid (and pg_class.relminmxid). And so it follows\nthat we're definitely not going to be able to do the thing that is\nostensibly the whole point of anti-wraparound vacuum (advance\nrelfrozenxid/relminmxid).\n\n> One reason I haven't pursued this kind of optimization is that it\n> doesn't really feel like it's fixing the whole problem. It would be a\n> little bit sad if we did a perfect job preventing useless vacuuming\n> but still allowed almost-useless vacuuming. Suppose we have a 1TB\n> relation and we trigger autovacuum. It cleans up a few things but\n> relfrozenxid is still old. On the next pass, we see that the\n> system-wide xmin has not advanced, so we don't trigger autovacuum\n> again. Then on the pass after that we see that the system-wide xmin\n> has advanced by 1. Shall we trigger an autovacuum of the whole\n> relation now, to be able to do relfrozenxid++? Seems dubious.\n\nI can see what you mean, but just fixing the most extreme case can be\na useful goal. It's often enough to stop the system from going into a\ntailspin, which is the real underlying goal here. 
Things that approach\nthe most extreme case (but don't quite hit it) don't have that\nquality.\n\nAn anti-wraparound vacuum is supposed to be a mechanism that the\nsystem escalates to when nothing else triggers an autovacuum worker to\nrun (which is aggressive but not anti-wraparound). That's not really\ntrue in practice, of course; anti-wraparound av often becomes a\nroutine thing. But I think that it's a good ideal to strive for -- it\nshould be rare.\n\nThe draft patch series now adds opportunistic freezing -- I should be\nable to post a new version in a few days time, once I've tied up some\nloose ends. My testing shows an interesting effect, when opportunistic\nfreezing is applied on top of the relfrozenxid thing: every autovacuum\nmanages to advance relfrozenxid, and so we'll never have to run an\naggressive autovacuum (much less an aggressive anti-wraparound\nautovacuum) in practice. And so (for example) when autovacuum runs\nagainst the pgbench_history table, it always sets its relfrozenxid to\na value very close to the OldestXmin -- usually the exact OldestXmin.\n\nOpportunistic freezing makes us avoid setting the all-visible bit for\na heap page without also setting the all-frozen bit -- when we're\nabout to do that, we go freeze the heap tuples and then set the entire\npage all-frozen (so we freeze anything <= OldestXmin, not <=\nFreezeLimit). We also freeze based on this more aggressive <=\nOldestXmin cutoff when pruning had to delete some tuples.\n\nThe patch still needs more polishing, but I think that we can make\nanti-wraparound vacuums truly exceptional with this design -- which\nwould make autovacuum a lot easier to deal with operationally. This\nseems like a feasible goal for Postgres 15, even (though still quite\nambitious). The opportunistic freezing stuff isn't free (the WAL\nrecords aren't tiny), but it's still not all that expensive. 
Plus I\nthink that the cost can be further reduced, with a little more work.\n\n> Part of the problem here, for both vacuuming-for-bloat and\n> vacuuming-for-relfrozenxid-advancement, we would really like to know\n> the distribution of old XIDs in the table.\n\nWhat I see with the draft patch series is that the oldest XID just\nisn't that old anymore, consistently -- we literally never fail to\nadvance relfrozenxid, in any autovacuum, for any table. And the value\nthat we end up with is consistently quite recent. This is something\nthat I see both with BenchmarkSQL, and pgbench. There is a kind of\nvirtuous circle, which prevents us from ever getting anywhere near\nhaving any table age in the tens of millions of XIDs.\n\nI guess that that makes avoiding useless vacuuming seem like less of a\npriority. ISTM that it should be something that is squarely aimed at\nkeeping things stable in truly pathological cases.\n\n> So I'm not certain of the way forward here. Just because we can't\n> prevent almost-useless vacuuming is not a sufficient reason to\n> continue allowing entirely-useless vacuuming that we can prevent. And\n> it seems like we need a bunch of new bookkeeping to do any better than\n> that, which seems like a lot of work. So maybe it's the most practical\n> path forward for the time being, but it feels like more of a\n> special-purpose kludge than a truly high-quality solution.\n\nI'm sure that either one of us will be able to poke holes in any\ndefinition of \"useless\" that is continuous (rather than discrete) --\nwhich, on reflection, pretty much means any definition that is\nconcerned with bloat. 
I think that you're right about that: the\nquestion there must be \"why are we even launching these\nbloat-orientated autovacuums that actually find no bloat?\".\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 14 Dec 2021 10:15:38 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Defining (and possibly skipping) useless VACUUM operations" }, { "msg_contents": "On Tue, Dec 14, 2021 at 1:16 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I think that you'd agree that the arguments in favor of skipping are\n> strongest ...\n\nWell I just don't understand why you insist on using the word\n\"skipping.\" I think what we're talking about - or at least what we\nshould be talking about - is whether relation_needs_vacanalyze() sets\n*wraparound = true right after the comment that says /* Force vacuum\nif table is at risk of wraparound */. And adding some kind of\nexception to the logic that's there now.\n\n> What I see with the draft patch series is that the oldest XID just\n> isn't that old anymore, consistently -- we literally never fail to\n> advance relfrozenxid, in any autovacuum, for any table. And the value\n> that we end up with is consistently quite recent. This is something\n> that I see both with BenchmarkSQL, and pgbench. There is a kind of\n> virtuous circle, which prevents us from ever getting anywhere near\n> having any table age in the tens of millions of XIDs.\n\nYeah, I hadn't thought about it from that perspective, but that does\nseem very good. I think it's inevitable that there will be cases where\nthat doesn't work out - e.g. you can always force the bad case by\nholding a table lock until your newborn heads off to college, or just\nby overthrottling autovacuum so that it can't get through the database\nin any reasonable amount of time - but it will be nice when it does\nwork out, for sure.\n\n> I guess that that makes avoiding useless vacuuming seem like less of a\n> priority. 
ISTM that it should be something that is squarely aimed at\n> keeping things stable in truly pathological cases.\n\nYes. I think \"pathological cases\" is a good summary of what's wrong\nwith autovacuum. When there's nothing too crazy happening, it actually\ndoes pretty well. But, when resources are tight or other corner cases\noccur, really dumb things start to happen. So it's reasonable to think\nabout how we can install guard rails that prevent complete insanity.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Dec 2021 13:46:48 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Defining (and possibly skipping) useless VACUUM operations" }, { "msg_contents": "On Tue, Dec 14, 2021 at 10:47 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Well I just don't understand why you insist on using the word\n> \"skipping.\" I think what we're talking about - or at least what we\n> should be talking about - is whether relation_needs_vacanalyze() sets\n> *wraparound = true right after the comment that says /* Force vacuum\n> if table is at risk of wraparound */. And adding some kind of\n> exception to the logic that's there now.\n\nActually, I agree. Skipping is the wrong term, especially because the\nphrase \"VACUUM skips...\" is already too overloaded. Not necessarily in\nvacuumlazy.c itself, but certainly on the mailing list.\n\n> Yeah, I hadn't thought about it from that perspective, but that does\n> seem very good. I think it's inevitable that there will be cases where\n> that doesn't work out - e.g. you can always force the bad case by\n> holding a table lock until your newborn heads off to college, or just\n> by overthrottling autovacuum so that it can't get through the database\n> in any reasonable amount of time - but it will be nice when it does\n> work out, for sure.\n\nRight. 
But when the patch doesn't manage to totally prevent\nanti-wraparound VACUUMs, things still work out a lot better than they\nwould now. I would expect that in practice this will usually only\nhappen when non-aggressive autovacuums keep getting canceled. And\nsure, it's still not ideal that things have come to that. But because\nwe now do freezing earlier (when it's relatively inexpensive), and\nbecause we set all-frozen bits incrementally, the anti-wraparound\nautovacuum will at least be able to reuse any freezing that we manage\nto do in all those canceled autovacuums.\n\nI think that this tends to make anti-wraparound VACUUMs mostly about\nnot being cancelable -- not so much about reliably advancing\nrelfrozenxid. I mean it doesn't change the basic rules (there is no\nchange to the definition of aggressive VACUUM), but in practice I\nthink that it'll just work that way. Which makes a great deal of\nsense. I hope to be able to totally get rid of\nvacuum_freeze_table_age.\n\nThe freeze map work in PostgreSQL 9.6 was really great, and very\neffective. But I think that it had an undesirable interaction with\nvacuum_freeze_min_age: if we set a heap page as all-visible (but not\nall frozen) before some of its tuples reached that age (which is very\nlikely), then tuples < vacuum_freeze_min_age aren't going to get\nfrozen until whenever we do an aggressive autovacuum. Very often, this\nwill only happen when we next do an anti-wraparound VACUUM (at least\nbefore Postgres 13). I suspect we risk running into a \"debt cliff\" in\nthe eventual anti-wraparound autovacuum. And so while\nvacuum_freeze_min_age kinda made sense prior to 9.6, it now seems to\nmake a lot less sense.\n\n> > I guess that that makes avoiding useless vacuuming seem like less of a\n> > priority. ISTM that it should be something that is squarely aimed at\n> > keeping things stable in truly pathological cases.\n>\n> Yes. 
I think \"pathological cases\" is a good summary of what's wrong\n> with autovacuum.\n\nThis is 100% my focus, in general. The main goal of the patch I'm\nworking on isn't so much improving performance as making it more\npredictable over time. Focussing on freezing while costs are low has a\nnatural tendency to spread the costs out over time. The system should\nnever \"get in over its head\" with debt that vacuum is expected to\neventually deal with.\n\n> When there's nothing too crazy happening, it actually\n> does pretty well. But, when resources are tight or other corner cases\n> occur, really dumb things start to happen. So it's reasonable to think\n> about how we can install guard rails that prevent complete insanity.\n\nAnother thing that I really want to stamp out is anything involving a\ntiny, seemingly-insignificant adverse event that has the potential to\ncause disproportionate impact over time. For example, right now a\nnon-aggressive VACUUM will never be able to advance relfrozenxid when\nit cannot get a cleanup lock on one heap page. It's actually extremely\nunlikely that that should have much of any impact, at least when you\ndetermine the new relfrozenxid for the table intelligently. Not\nacquiring one cleanup lock on one heap page on a huge table should not\nhave such an extreme impact.\n\nIt's even worse when the systemic impact over time is considered.\nLet's say you only have a 20% chance of failing to acquire one or more\ncleanup locks during a non-aggressive autovacuum for a given large\ntable, meaning that you'll fail to advance relfrozenxid in at least\n20% of all non-aggressive autovacuums. I think that that might be a\nlot worse than it sounds, because the impact compounds over time --\nI'm not sure that 20% is much worse than 60%, or much better than 5%\n(very hard to model it). 
But if we make the high-level, abstract idea\nof \"aggressiveness\" more of a continuous thing, and not something\nthat's defined by sharp (and largely meaningless) XID-based cutoffs,\nwe have every chance of nipping these problems in the bud (without\nneeding to model much of anything).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 14 Dec 2021 12:38:00 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Defining (and possibly skipping) useless VACUUM operations" } ]
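The relfrozenxid invariant discussed in the thread above (the final value need only be <= any XID still present in the table, so FreezeLimit is needlessly conservative) can be sketched as a toy model. This is a hypothetical Python simulation with plain-integer XIDs and a simplified freezing rule, not vacuumlazy.c code:

```python
def vacuum_and_advance(tuple_xmins, oldest_xmin, freeze_limit):
    """Toy model of choosing a new relfrozenxid after VACUUM, two ways.
    XIDs are plain integers; wraparound arithmetic is ignored."""
    # Simplified freezing rule: anything older than FreezeLimit gets
    # frozen; XIDs >= OldestXmin may still be running, so they survive.
    remaining = [x for x in tuple_xmins if x >= freeze_limit]

    # Conservative choice: the final relfrozenxid is just FreezeLimit.
    conservative = freeze_limit

    # Optimal choice: the invariant only requires a value <= every XID
    # still in the table; if everything was frozen, OldestXmin is safe.
    optimal = min(remaining, default=oldest_xmin)

    return remaining, conservative, optimal
```

In the bulk-load case mentioned in the thread, every xmin falls below FreezeLimit, nothing remains unfrozen, and the optimal value lands exactly on OldestXmin rather than on the much older FreezeLimit.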
[ { "msg_contents": "Hi all,\n(CCing some folks who worked on this area lately)\n\nThe following sequence of commands generates an assertion failure, as\nof $subject:\nselect pg_replication_origin_create('popo');\nselect pg_replication_origin_session_setup('popo');\nbegin;\nselect txid_current();\nprepare transaction 'popo'; -- assertion fails\n\nThe problem originates from 1eb6d65, down to 11, where we finish by\ntriggering this assertion because replorigin_session_origin_lsn is not\nvalid:\n+ if (replorigin)\n+ {\n+ Assert(replorigin_session_origin_lsn != InvalidXLogRecPtr);\n+ hdr->origin_lsn = replorigin_session_origin_lsn;\n+ hdr->origin_timestamp = replorigin_session_origin_timestamp;\n+ }\n\nAs far as I understand this code and based on the docs,\npg_replication_origin_xact_setup(), which would set up the session\norigin LSN and timestamp, is an optional choice.\nreplorigin_session_advance() would be a no-op for remote_lsn, and\nlocal_lsn requires an update. Now please note that I am not really fluent\nwith this stuff, so feel free to correct me. The intention of the\ncode also seems that XACT_XINFO_HAS_ORIGIN should still be set, but\nwith no data.\n\nAt the end, it seems to me that the assertion could just be dropped as\nper the attached, as we'd finish by setting up origin_lsn and\norigin_timestamp in the 2PC file header with some invalid data. 
The\nelse block could just be removed, but then one would need to guess\nfrom origin.c that both replorigin_session_* may not be set.\n\nI am not completely sure that this is the right move, though, as\npg_replication_origin_status has a *very* limited amount of tests.\nAdding a test to replorigin.sql with 2PC transactions able to trigger\nthe problem is easy once we rely on a different slot to not influence\nthe existing tests, as the expected contents of\npg_replication_origin_status could be messed up in the follow-up\ntests.\n\nThoughts?\n--\nMichael", "msg_date": "Mon, 13 Dec 2021 12:43:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Assertion failure with replication origins and PREPARE TRANSACTIOn" }, { "msg_contents": "On Mon, Dec 13, 2021 at 12:44 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Hi all,\n> (CCing some folks who worked on this area lately)\n>\n> The following sequence of commands generates an assertion failure, as\n> of $subject:\n> select pg_replication_origin_create('popo');\n> select pg_replication_origin_session_setup('popo');\n> begin;\n> select txid_current();\n> prepare transaction 'popo'; -- assertion fails\n>\n> The problem originates from 1eb6d65, down to 11, where we finish by\n> triggering this assertion because replorigin_session_origin_lsn is not\n> valid:\n> + if (replorigin)\n> + {\n> + Assert(replorigin_session_origin_lsn != InvalidXLogRecPtr);\n> + hdr->origin_lsn = replorigin_session_origin_lsn;\n> + hdr->origin_timestamp = replorigin_session_origin_timestamp;\n> + }\n>\n> As far as I understand this code and based on the docs,\n> pg_replication_origin_xact_setup(), which would set up the session\n> origin LSN and timestamp, is an optional choice.\n> replorigin_session_advance() would be a no-op for remote_lsn, and\n> local_lsn requires an update. Now please note that I am not really fluent\n> with this stuff, so feel free to correct me. 
The intention of the\n> code also seems that XACT_XINFO_HAS_ORIGIN should still be set, but\n> with no data.\n>\n> At the end, it seems to me that the assertion could just be dropped as\n> per the attached, as we'd finish by setting up origin_lsn and\n> origin_timestamp in the 2PC file header with some invalid data.\n\nWhy do we check if replorigin_session_origin_lsn is not invalid data\nonly when PREPARE TRANSACTION? Looking at commit and rollback code, we\ndon't have assertions or checks that check\nreplorigin_session_origin_lsn/timestamp is valid data. So it looks\nlike we accept also invalid data in those cases since\nreplorigin_advance doesn’t move LSN backward while applying a commit\nor rollback record even if LSN is invalid. The same is true for\nPREPARE records, i.g., even if replication origin LSN is invalid, it\ndoesn't go backward. If replication origin LSN and timestamp in commit\nrecord and rollback record must be valid data too, I think we should\nsimilar checks for commit and rollback code and I think the assertions\nwill fail in the case I reported before[1].\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoAMuTrXezGqaV1Q5H-Hf%2BzKqGbU8XuCZk9iQMYueF3%2B8w%40mail.gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 13 Dec 2021 16:30:36 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with replication origins and PREPARE\n TRANSACTIOn" }, { "msg_contents": "On Mon, Dec 13, 2021 at 04:30:36PM +0900, Masahiko Sawada wrote:\n> Why do we check if replorigin_session_origin_lsn is not invalid data\n> only when PREPARE TRANSACTION?\n\nWell, it does not matter for the case of PREPARE TRANSACTION, does it?\nwe would include values for the the origin LSN and timestamp in\nany case as these are fixed in the 2PC file header.\n\n> Looking at commit and rollback code, we\n> don't have assertions or checks that check\n> 
replorigin_session_origin_lsn/timestamp is valid data. So it looks\n> like we accept also invalid data in those cases since\n> replorigin_advance doesn’t move LSN backward while applying a commit\n> or rollback record even if LSN is invalid. The same is true for\n> PREPARE records, i.g., even if replication origin LSN is invalid, it\n> doesn't go backward. If replication origin LSN and timestamp in commit\n> record and rollback record must be valid data too, I think we should\n> similar checks for commit and rollback code and I think the assertions\n> will fail in the case I reported before[1].\n\nIt seems to me that the origin LSN and timestamp are optional, so as\none may choose to not call pg_replication_origin_xact_setup() (as said\nin my first message), and we would not require more sanity checks when\nadvancing the replication origin in the commit and rollback code\npaths. Let's see what others think here.\n--\nMichael", "msg_date": "Mon, 13 Dec 2021 17:30:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Assertion failure with replication origins and PREPARE\n TRANSACTIOn" }, { "msg_contents": "On Mon, Dec 13, 2021 at 2:00 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Dec 13, 2021 at 04:30:36PM +0900, Masahiko Sawada wrote:\n> > Why do we check if replorigin_session_origin_lsn is not invalid data\n> > only when PREPARE TRANSACTION?\n>\n> Well, it does not matter for the case of PREPARE TRANSACTION, does it?\n> we would include values for the the origin LSN and timestamp in\n> any case as these are fixed in the 2PC file header.\n>\n> > Looking at commit and rollback code, we\n> > don't have assertions or checks that check\n> > replorigin_session_origin_lsn/timestamp is valid data. So it looks\n> > like we accept also invalid data in those cases since\n> > replorigin_advance doesn’t move LSN backward while applying a commit\n> > or rollback record even if LSN is invalid. 
The same is true for\n> > PREPARE records, i.g., even if replication origin LSN is invalid, it\n> > doesn't go backward. If replication origin LSN and timestamp in commit\n> > record and rollback record must be valid data too, I think we should\n> > similar checks for commit and rollback code and I think the assertions\n> > will fail in the case I reported before[1].\n>\n> It seems to me that the origin LSN and timestamp are optional, so as\n> one may choose to not call pg_replication_origin_xact_setup() (as said\n> in my first message), and we would not require more sanity checks when\n> advancing the replication origin in the commit and rollback code\n> paths.\n>\n\nThis is my understanding as well. I think here the point of Sawada-San\nis why to have additional for replorigin_session_origin_lsn in prepare\ncode path? I think the way you have it in your patch is correct as\nwell but it is probably better to keep the check only based on\nreplorigin so as to keep this check consistent in all code paths.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 13 Dec 2021 15:46:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with replication origins and PREPARE\n TRANSACTIOn" }, { "msg_contents": "On Mon, Dec 13, 2021 at 03:46:55PM +0530, Amit Kapila wrote:\n> This is my understanding as well. I think here the point of Sawada-San\n> is why to have additional for replorigin_session_origin_lsn in prepare\n> code path? I think the way you have it in your patch is correct as\n> well but it is probably better to keep the check only based on\n> replorigin so as to keep this check consistent in all code paths.\n\nWell, I don't think that it is a big deal one way or the other, as\nwe'd finish with InvalidXLogRecPtr for the LSN and 0 for the timestamp\nanyway. 
If both of you feel that just removing the assertion rather\nthan adding an extra check is better, that's fine by me :)\n--\nMichael", "msg_date": "Mon, 13 Dec 2021 19:53:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Assertion failure with replication origins and PREPARE\n TRANSACTIOn" }, { "msg_contents": "On Mon, Dec 13, 2021 at 07:53:43PM +0900, Michael Paquier wrote:\n> Well, I don't think that it is a big deal one way or the other, as\n> we'd finish with InvalidXLogRecPtr for the LSN and 0 for the timestamp\n> anyway. If both of you feel that just removing the assertion rather\n> than adding an extra check is better, that's fine by me :)\n\nLooked at that today, and done this way. The tests have been extended\na bit more with one ROLLBACK and one ROLLBACK PREPARED, while checking\nfor the contents decoded.\n--\nMichael", "msg_date": "Tue, 14 Dec 2021 11:14:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Assertion failure with replication origins and PREPARE\n TRANSACTIOn" } ]
[ { "msg_contents": "I want to propose an implementation of pg_import_system_collations() for\nWIN32 using EnumSystemLocalesEx() [1], which is available from Windows\nServer 2008 onwards.\n\nThe patch includes a test emulating that of collate.linux.utf8, but for\nWindows-1252. The main difference is that it doesn't have the tests for\nTurkish dotted and undotted 'i', since that locale is WIN1254.\n\nI am opening an item in the commitfest for this.\n\n[1]\nhttps://docs.microsoft.com/en-us/windows/win32/api/winnls/nf-winnls-enumsystemlocalesex\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Mon, 13 Dec 2021 09:41:10 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": true, "msg_subject": "WIN32 pg_import_system_collations" }, { "msg_contents": "On Mon, Dec 13, 2021 at 9:41 AM Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> wrote:\n\nPer path tester.\n\n\n> Regards,\n>\n> Juan José Santamaría Flecha\n>", "msg_date": "Mon, 13 Dec 2021 17:28:47 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIN32 pg_import_system_collations" }, { "msg_contents": "On Tue, Dec 14, 2021 at 5:29 AM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n> On Mon, Dec 13, 2021 at 9:41 AM Juan José Santamaría Flecha <juanjo.santamaria@gmail.com> wrote:\n> Per path tester.\n\nHi Juan José,\n\nI haven't tested yet but +1 for the feature. I guess the API didn't\nexist at the time collation support was added.\n\n+ /*\n+ * Windows will use hyphens between language and territory, where ANSI\n+ * uses an underscore. Simply make it ANSI looking.\n+ */\n+ hyphen = strchr(localebuf, '-');\n+ if (hyphen)\n+ *hyphen = '_';\n+\n\nThis conversion makes sense, to keep the user experience the same\nacross platforms. Nitpick on the comment: why ANSI? 
I think we can\ncall \"en_NZ\" a POSIX locale identifier[1], and I think we can call\n\"en-NZ\" a BCP 47 language tag.\n\n+/*\n+ * This test is for Windows/Visual Studio systems and assumes that a full set\n+ * of locales is installed. It must be run in a database with WIN1252 encoding,\n+ * because of the locales' encondings. We lose some interesting cases from the\n+ * UTF-8 version, like Turkish dotted and undotted 'i' or Greek sigma.\n+ */\n\ns/encondings/encodings/\n\nWhen would the full set of locales not be installed on a Windows\nsystem, and why does this need Visual Studio? Wondering if this test\nwill work with some of the frankenstein/cross toolchains tool chains\n(not objecting if it doesn't and could be skipped, just trying to\nunderstand the comment).\n\nSlightly related to this, in case you didn't see it, I'd also like to\nuse BCP 47 tags for the default locale for PostgreSQL 15[2].\n\n[1] https://en.wikipedia.org/wiki/Locale_(computer_software)#POSIX_platforms\n[2] https://www.postgresql.org/message-id/flat/CA%2BhUKGJ%3DXThErgAQRoqfCy1bKPxXVuF0%3D2zDbB%2BSxDs59pv7Fw%40mail.gmail.com\n\n\n", "msg_date": "Tue, 14 Dec 2021 09:53:27 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WIN32 pg_import_system_collations" }, { "msg_contents": "On Mon, Dec 13, 2021 at 9:54 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n>\n> I haven't tested yet but +1 for the feature. I guess the API didn't\n> exist at the time collation support was added.\n>\n> Good to hear.\n\n\n> This conversion makes sense, to keep the user experience the same\n> across platforms. Nitpick on the comment: why ANSI? I think we can\n> call \"en_NZ\" a POSIX locale identifier[1], and I think we can call\n> \"en-NZ\" a BCP 47 language tag.\n>\n> POSIX also works for me.\n\n\n> When would the full set of locales not be installed on a Windows\n> system, and why does this need Visual Studio? 
Wondering if this test\n> will work with some of the frankenstein/cross toolchains tool chains\n> (not objecting if it doesn't and could be skipped, just trying to\n> understand the comment).\n>\n> What I meant to say is that to run the test, you need a database that has\nsuccessfully run pg_import_system_collations. This would be also possible\nin Mingw for _WIN32_WINNT> = 0x0600, but the current value in\nsrc\\include\\port\\win32.h is _WIN32_WINNT = 0x0501 when compiling with\nMingw.\n\n\n> Regards,\n\n Juan José Santamaría Flecha", "msg_date": "Tue, 14 Dec 2021 21:13:52 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIN32 pg_import_system_collations" }, { "msg_contents": "On Wed, Dec 15, 2021 at 9:14 AM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n> What I meant to say is that to run the test, you need a database that has successfully run pg_import_system_collations. This would be also possible in Mingw for _WIN32_WINNT> = 0x0600, but the current value in src\\include\\port\\win32.h is _WIN32_WINNT = 0x0501 when compiling with Mingw.\n\nAh, right. I hope we can make the leap to 0x0A00 (Win10) soon and\njust stop thinking about these old ghosts, as mentioned by various\npeople in various threads. Do you happen to know if there are\ncomplications for that, with the non-MSVC tool chains?\n\n\n", "msg_date": "Wed, 15 Dec 2021 10:45:28 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WIN32 pg_import_system_collations" }, { "msg_contents": "On Wed, Dec 15, 2021 at 10:45:28AM +1300, Thomas Munro wrote:\n> Ah, right. I hope we can make the leap to 0x0A00 (Win10) soon and\n> just stop thinking about these old ghosts, as mentioned by various\n> people in various threads.\n\nSeeing your message here.. My apologies for the short digression.\nWould that mean that we could use CreateSymbolicLinkA() as a mapper\nfor pgreadlink() rather than junction points?
I am wondering how much\ncode in src/port/ such a move could allow us to do.\n--\nMichael", "msg_date": "Wed, 15 Dec 2021 11:52:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: WIN32 pg_import_system_collations" }, { "msg_contents": "On Wed, Dec 15, 2021 at 3:52 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Dec 15, 2021 at 10:45:28AM +1300, Thomas Munro wrote:\n> > Ah, right. I hope we can make the leap to 0x0A00 (Win10) soon and\n> > just stop thinking about these old ghosts, as mentioned by various\n> > people in various threads.\n>\n> Seeing your message here.. My apologies for the short digression.\n> Would that mean that we could use CreateSymbolicLinkA() as a mapper\n> for pgreadlink() rather than junction points? I am wondering how much\n> code in src/port/ such a move could allow us to do.\n\nSadly, (1) it wouldn't work unless running with a special privilege or\nas admin, and (2) it wouldn't work on non-NTFS filesystems. I think\nit's mostly intended to allow things like unpacking tarballs, checking\nout git repos etc etc etc that came from Unix systems, which is why it\nworks with 'developer mode' enabled[1], though obviously it wouldn't\nbe totally impossible for us to require that privilege. 
Didn't seem\ngreat to me, though, that's why I gave up on it over in\nhttps://commitfest.postgresql.org/36/3090/ where this was recently\ndiscussed.\n\n[1] https://blogs.windows.com/windowsdeveloper/2016/12/02/symlinks-windows-10/\n\n\n", "msg_date": "Wed, 15 Dec 2021 17:03:30 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WIN32 pg_import_system_collations" }, { "msg_contents": "Hi,\n\nOn Mon, Dec 13, 2021 at 05:28:47PM +0100, Juan Jos� Santamar�a Flecha wrote:\n> On Mon, Dec 13, 2021 at 9:41 AM Juan Jos� Santamar�a Flecha <\n> juanjo.santamaria@gmail.com> wrote:\n> \n> Per path tester.\n\nThis version doesn't apply anymore:\n\nhttp://cfbot.cputube.org/patch_36_3450.log\n=== Applying patches on top of PostgreSQL commit ID e0e567a106726f6709601ee7cffe73eb6da8084e ===\n=== applying patch ./v2-0001-WIN32-pg_import_system_collations.patch\n[...]\npatching file src/tools/msvc/vcregress.pl\nHunk #1 succeeded at 153 (offset -1 lines).\nHunk #2 FAILED at 170.\n1 out of 2 hunks FAILED -- saving rejects to file src/tools/msvc/vcregress.pl.rej\n\nCould you send a rebased version? In the meantime I will switch the CF entry\nto Waiting on Author.\n\n\n", "msg_date": "Wed, 19 Jan 2022 17:53:12 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WIN32 pg_import_system_collations" }, { "msg_contents": "On Wed, Jan 19, 2022 at 10:53 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n>\n> This version doesn't apply anymore:\n>\n> Thanks for the heads up.\n\nPlease find attached a rebased patch. 
I have also rewritten some comments\nto address previous reviews, code and test remain the same.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Wed, 19 Jan 2022 13:24:40 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIN32 pg_import_system_collations" }, { "msg_contents": "Hi Juan José,\n\nI a bit tested this feature and have small doubts about block:\n\n+/*\n+ * Windows will use hyphens between language and territory, where POSIX\n+ * uses an underscore. Simply make it POSIX looking.\n+ */\n+ hyphen = strchr(localebuf, '-');\n+ if (hyphen)\n+ *hyphen = '_';\n\nAfter this block modified collation name is used in function\n\nGetNLSVersionEx(COMPARE_STRING, wide_collcollate, &version)\n\n(see win32_read_locale() -> CollationFromLocale() -> CollationCreate()\ncall). Is it correct to use (wide_collcollate = \"en_NZ\") instead of\n(wide_collcollate = \"en-NZ\") in GetNLSVersionEx() function?\n\n1) Documentation [1], [2], quote:\nIf it is a neutral locale for which the script is significant,\nthe pattern is <language>-<Script>.\n\n2) Conversation [3], David Rowley, quote:\nThen, since GetNLSVersionEx()\nwants yet another variant with a - rather than an _, I've just added a\ncouple of lines to swap the _ for a -.\n\n\nOn my computer (Windows 10 Pro 21H2 19044.1466, MSVC2019 version\n16.11.9) work correctly both variants (\"en_NZ\", \"en-NZ\").\n\nBut David Rowley (MSVC2010 and MSVC2017) replaced \"_\" to \"-\"\nfor the same function. 
Maybe he had a problem with \"_\" on MSVC2010 or \nMSVC2017?\n\n[1] \nhttps://docs.microsoft.com/en-us/windows/win32/api/winnls/nf-winnls-getnlsversionex\n[2] https://docs.microsoft.com/en-us/windows/win32/intl/locale-names\n[3] \nhttps://www.postgresql.org/message-id/flat/CAApHDvq3FXpH268rt-6sD_Uhe7Ekv9RKXHFvpv%3D%3Duh4c9OeHHQ%40mail.gmail.com\n\nWith best regards,\nDmitry Koval.\n\n\n", "msg_date": "Tue, 25 Jan 2022 10:56:53 +0300", "msg_from": "Dmitry Koval <d.koval@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: WIN32 pg_import_system_collations" } ]
[ { "msg_contents": "Hi,\n\nCommit 9556aa01c69 (Use single-byte Boyer-Moore-Horspool search even\nwith multibyte encodings), was a speed improvement for the majority of\ncases, but when the match was toward the end of the string, the\nslowdown in text_position_get_match_pos() was noticeable. It was found\nthat there was a lot of overhead in pg_mblen(). [1]\n\nThe attached exploratory PoC improves this for utf-8. It applies on\ntop v25 of my utf-8 verification patch in [2], since one approach\nrelies on the DFA from it. The other three approaches are:\n- a version of pg_utf_mblen() that uses a lookup table [3]\n- an inlined copy of pg_utf_mblen()\n- an ascii fast path with a fallback to the inlined copy of pg_utf_mblen()\n\nThe test is attached and the test function is part of the patch. It's\nbased on the test used in the commit above. The test searches for a\nstring that's at the end of a ~1 million byte string. This is on gcc\n11 with 2-3 runs to ensure repeatability, but I didn't bother with\nstatistics because the differences are pretty big:\n\n patch | no match | ascii | mulitbyte\n-----------------------------------------+----------+-------+-----------\n PG11 | 1120 | 1100 | 900\n master | 381 | 2350 | 1900\n DFA | 386 | 1640 | 1640\n branchless utf mblen | 387 | 4100 | 2600\n inline pg_utf_mblen() | 380 | 1080 | 920\n inline pg_utf_mblen() + ascii fast path | 382 | 470 | 918\n\nNeither of the branchless approaches worked well. The DFA can't work\nas well here as in verification because it must do additional work.\nInlining pg_utf_mblen() restores worst-case performance to PG11\nlevels. The ascii fast path is a nice improvement on top of that. A\nsimilar approach could work for pg_mbstrlen() as well, but I haven't\nlooked into that yet. There are other callers of pg_mblen(), but I\nhaven't looked into whether they are performance-sensitive. 
A more\ngeneral application would be preferable to a targeted one.\n\n[1] https://www.postgresql.org/message-id/b65df3d8-1f59-3bd7-ebbe-68b81d5a76a4%40iki.fi\n[2] https://www.postgresql.org/message-id/CAFBsxsHG%3Dg6W8Mie%2B_NO8dV6O0pO2stxrnS%3Dme5ZmGqk--fd5g%40mail.gmail.com\n[3] https://github.com/skeeto/branchless-utf8/blob/master/utf8.h\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 13 Dec 2021 12:02:47 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "speed up text_position() for utf-8" }, { "msg_contents": "I wrote:\n\n> The test is attached and the test function is part of the patch. It's\n> based on the test used in the commit above. The test searches for a\n> string that's at the end of a ~1 million byte string. This is on gcc\n> 11 with 2-3 runs to ensure repeatability, but I didn't bother with\n> statistics because the differences are pretty big:\n>\n> patch | no match | ascii | mulitbyte\n> -----------------------------------------+----------+-------+-----------\n> PG11 | 1120 | 1100 | 900\n> master | 381 | 2350 | 1900\n> DFA | 386 | 1640 | 1640\n> branchless utf mblen | 387 | 4100 | 2600\n> inline pg_utf_mblen() | 380 | 1080 | 920\n> inline pg_utf_mblen() + ascii fast path | 382 | 470 | 918\n\nI failed to mention that the above numbers are milliseconds, so\nsmaller is better.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 15 Dec 2021 12:43:37 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up text_position() for utf-8" }, { "msg_contents": "Attached is a short patch series to develop some ideas of inlining\npg_utf_mblen().\n\n0001 puts the main implementation of pg_utf_mblen() into an inline\nfunction and uses this in pg_mblen(). This is somewhat faster in the\nstrpos tests, so that gives some measure of the speedup expected for\nother callers. 
Text search seems to call this a lot, so this might\nhave noticeable benefit.\n\n0002 refactors text_position_get_match_pos() to use\npg_mbstrlen_with_len(). This itself is significantly faster when\ncombined with 0001, likely because the latter can inline the call to\npg_mblen(). The intention is to speed up more than just text_position.\n\n0003 explicitly specializes for the inline version of pg_utf_mblen()\ninto pg_mbstrlen_with_len(), but turns out to be almost as slow as\nmaster for ascii. It doesn't help if I undo the previous change in\npg_mblen(), and I haven't investigated why yet.\n\n0002 looks good now, but the experience with 0003 makes me hesitant to\npropose this seriously until I can figure out what's going on there.\n\nThe test is as earlier, a worst-case substring search, times in milliseconds.\n\n patch | no match | ascii | multibyte\n--------+----------+-------+-----------\n PG11 | 1220 | 1220 | 1150\n master | 385 | 2420 | 1980\n 0001 | 390 | 2180 | 1670\n 0002 | 389 | 1330 | 1100\n 0003 | 391 | 2100 | 1360\n\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 17 Dec 2021 17:01:37 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up text_position() for utf-8" }, { "msg_contents": "I wrote:\n\n> 0001 puts the main implementation of pg_utf_mblen() into an inline\n> function and uses this in pg_mblen(). This is somewhat faster in the\n> strpos tests, so that gives some measure of the speedup expected for\n> other callers. Text search seems to call this a lot, so this might\n> have noticeable benefit.\n>\n> 0002 refactors text_position_get_match_pos() to use\n> pg_mbstrlen_with_len(). This itself is significantly faster when\n> combined with 0001, likely because the latter can inline the call to\n> pg_mblen(). 
The intention is to speed up more than just text_position.\n>\n> 0003 explicitly specializes for the inline version of pg_utf_mblen()\n> into pg_mbstrlen_with_len(), but turns out to be almost as slow as\n> master for ascii. It doesn't help if I undo the previous change in\n> pg_mblen(), and I haven't investigated why yet.\n>\n> 0002 looks good now, but the experience with 0003 makes me hesitant to\n> propose this seriously until I can figure out what's going on there.\n>\n> The test is as earlier, a worst-case substring search, times in milliseconds.\n>\n> patch | no match | ascii | multibyte\n> --------+----------+-------+-----------\n> PG11 | 1220 | 1220 | 1150\n> master | 385 | 2420 | 1980\n> 0001 | 390 | 2180 | 1670\n> 0002 | 389 | 1330 | 1100\n> 0003 | 391 | 2100 | 1360\n\nI tried this test on a newer CPU, and 0003 had no regression. Both\nsystems used gcc 11.2. Rather than try to figure out why an experiment\nhad unexpected behavior, I plan to test 0001 and 0002 on a couple\ndifferent compilers/architectures and call it a day. It's also worth\nnoting that 0002 by itself seemed to be decently faster on the newer\nmachine, but not as fast as 0001 and 0002 together.\n\nLooking at the assembly, pg_mblen is inlined into\npg_mbstrlen_[with_len] and pg_mbcliplen, so the specialization for\nutf-8 in 0001 would be inlined in the other 3 as well. That's only a\nfew bytes, so I think it's fine.\n\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Jan 2022 14:24:31 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up text_position() for utf-8" }, { "msg_contents": "I wrote:\n\n> I tried this test on a newer CPU, and 0003 had no regression. Both\n> systems used gcc 11.2. Rather than try to figure out why an experiment\n> had unexpected behavior, I plan to test 0001 and 0002 on a couple\n> different compilers/architectures and call it a day. 
It's also worth\n> noting that 0002 by itself seemed to be decently faster on the newer\n> machine, but not as fast as 0001 and 0002 together.\n\nI tested four machines with various combinations of patches, and it\nseems the only thing they all agree on is that 0002 is a decent\nimprovement (full results attached). The others can be faster or\nslower. 0002 also simplifies things, so it has that going for it. I\nplan to commit that this week unless there are objections.\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 2 Feb 2022 15:20:56 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up text_position() for utf-8" } ]
[ { "msg_contents": "Hi,\n\nThe last three buildfarm runs on conchuela show a failure in initdb:\n\nShared object \"libssl.so.48\" not found, required by \"libldap_r-2.4.so.2\"\n\nIt seems likely to me that this is a machine configuration issue\nrather than the result of some recent change in PostgreSQL, because\nthe first failure 2 days ago shows only this as a recent PostgreSQL\nchange:\n\n07eee5a0dc Sat Dec 11 19:10:51 2021 UTC Create a new type category\nfor \"internal use\" types.\n\nAnd that doesn't seem like it could cause this.\n\nAny thoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Dec 2021 11:48:27 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "conchuela has some SSL issues" }, { "msg_contents": "\nOn 12/13/21 11:48, Robert Haas wrote:\n> Hi,\n>\n> The last three buildfarm runs on conchuela show a failure in initdb:\n>\n> Shared object \"libssl.so.48\" not found, required by \"libldap_r-2.4.so.2\"\n>\n> It seems likely to me that this is a machine configuration issue\n> rather than the result of some recent change in PostgreSQL, because\n> the first failure 2 days ago shows only this as a recent PostgreSQL\n> change:\n>\n> 07eee5a0dc Sat Dec 11 19:10:51 2021 UTC Create a new type category\n> for \"internal use\" types.\n>\n> And that doesn't seem like it could cause this.\n>\n> Any thoughts?\n>\n\nIt's also failing on REL_14_STABLE, so I think it must be an\nenvironmental change.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 13 Dec 2021 14:53:37 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: conchuela has some SSL issues" }, { "msg_contents": "On 2021-12-13 20:53, Andrew Dunstan wrote:\n\n>> Any thoughts?\n>>\n> \n> It's also failing on REL_14_STABLE, so I think it must be an\n> environmental change.\n\nI did an pkg update && pkg upgrade and it 
messed up the SSL-libraries. \nIt had both libressl and openssl and when I upgraded it some how removed \nlibressl-libraries. One would think it would still work as it would \npick up the openssl library instead. But apparently not.\n\nI will take a look at it.\n\nAt the moment I have disabled new builds until I have fixed it.\n\nSorry for the inconvenience.\n\n/Mikael\n\n\n\n", "msg_date": "Tue, 14 Dec 2021 15:05:12 +0100", "msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>", "msg_from_op": false, "msg_subject": "Re: conchuela has some SSL issues" }, { "msg_contents": "\n\nOn 2021-12-14 15:05, Mikael Kjellström wrote:\n> I will take a look at it.\n> \n> At the moment I have disabled new builds until I have fixed it.\n> \n> Sorry for the inconvenience.\n\nShould be fixed now.\n\n/Mikael\n\n\n\n", "msg_date": "Wed, 15 Dec 2021 16:47:14 +0100", "msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>", "msg_from_op": false, "msg_subject": "Re: conchuela has some SSL issues" } ]
[ { "msg_contents": "Hello.\n\nAs complained in pgsql-bugs [1], when a process is terminated due to\nmax_slot_wal_keep_size, the related messages don't mention the root\ncause for *the termination*. Note that the third message does not\nshow for temporary replication slots.\n\n[pid=a] LOG: terminating process x to release replication slot \"s\"\n[pid=x] LOG: FATAL: terminating connection due to administrator command\n[pid=a] LOG: invalidting slot \"s\" because its restart_lsn X/X exceeds max_slot_wal_keep_size\n\nThe attached patch attaches a DETAIL line to the first message.\n\n> [17605] LOG: terminating process 17614 to release replication slot \"s1\"\n+ [17605] DETAIL: The slot's restart_lsn 0/2C0000A0 exceeds max_slot_wal_keep_size.\n> [17614] FATAL: terminating connection due to administrator command\n> [17605] LOG: invalidating slot \"s1\" because its restart_lsn 0/2C0000A0 exceeds max_slot_wal_keep_size\n\nSomewhat the second and fourth lines look inconsistent each other but\nthat wouldn't be such a problem. I don't think we want to concatenate\nthe two lines together as the result is a bit too long.\n\n> LOG: terminating process 17614 to release replication slot \"s1\" because it's restart_lsn 0/2C0000A0 exceeds max_slot_wal_keep_size.\n\nWhat do you think about this?\n\n[1] https://www.postgresql.org/message-id/20211214.101137.379073733372253470.horikyota.ntt%40gmail.com\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 14 Dec 2021 13:04:56 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "more descriptive message for process termination due to\n max_slot_wal_keep_size" }, { "msg_contents": "On Tue, Dec 14, 2021 at 9:35 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Hello.\n>\n> As complained in pgsql-bugs [1], when a process is terminated due to\n> max_slot_wal_keep_size, the related messages don't mention the root\n> cause for *the termination*. 
Note that the third message does not\n> show for temporary replication slots.\n>\n> [pid=a] LOG: terminating process x to release replication slot \"s\"\n> [pid=x] LOG: FATAL: terminating connection due to administrator command\n> [pid=a] LOG: invalidting slot \"s\" because its restart_lsn X/X exceeds max_slot_wal_keep_size\n>\n> The attached patch attaches a DETAIL line to the first message.\n>\n> > [17605] LOG: terminating process 17614 to release replication slot \"s1\"\n> + [17605] DETAIL: The slot's restart_lsn 0/2C0000A0 exceeds max_slot_wal_keep_size.\n> > [17614] FATAL: terminating connection due to administrator command\n> > [17605] LOG: invalidating slot \"s1\" because its restart_lsn 0/2C0000A0 exceeds max_slot_wal_keep_size\n>\n> Somewhat the second and fourth lines look inconsistent each other but\n> that wouldn't be such a problem. I don't think we want to concatenate\n> the two lines together as the result is a bit too long.\n>\n> > LOG: terminating process 17614 to release replication slot \"s1\" because it's restart_lsn 0/2C0000A0 exceeds max_slot_wal_keep_size.\n>\n> What do you think about this?\n\nAgree. I think we should also specify the restart_lsn value which\nwould be within max_slot_wal_keep_size for better understanding.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 14 Dec 2021 19:31:21 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: more descriptive message for process termination due to\n max_slot_wal_keep_size" }, { "msg_contents": "On Tue, Dec 14, 2021 at 9:35 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Hello.\n>\n> As complained in pgsql-bugs [1], when a process is terminated due to\n> max_slot_wal_keep_size, the related messages don't mention the root\n> cause for *the termination*. 
Note that the third message does not\n> show for temporary replication slots.\n>\n> [pid=a] LOG: \"terminating process %d to release replication slot \\\"%s\\\"\"\n> [pid=x] LOG: FATAL: terminating connection due to administrator command\n> [pid=a] LOG: invalidting slot \"s\" because its restart_lsn X/X exceeds max_slot_wal_keep_size\n>\n> The attached patch attaches a DETAIL line to the first message.\n>\n> > [17605] LOG: terminating process 17614 to release replication slot \"s1\"\n> + [17605] DETAIL: The slot's restart_lsn 0/2C0000A0 exceeds max_slot_wal_keep_size.\n> > [17614] FATAL: terminating connection due to administrator command\n> > [17605] LOG: invalidating slot \"s1\" because its restart_lsn 0/2C0000A0 exceeds max_slot_wal_keep_size\n>\n> Somewhat the second and fourth lines look inconsistent each other but\n> that wouldn't be such a problem. I don't think we want to concatenate\n> the two lines together as the result is a bit too long.\n>\n> > LOG: terminating process 17614 to release replication slot \"s1\" because it's restart_lsn 0/2C0000A0 exceeds max_slot_wal_keep_size.\n>\n> What do you think about this?\n>\n> [1] https://www.postgresql.org/message-id/20211214.101137.379073733372253470.horikyota.ntt%40gmail.com\n\n+1 to give more context to the \"terminating process %d to release\nreplication slot \\\"%s\\\"\" message.\n\nHow about having below, instead of adding errdetail:\n\"terminating process %d to release replication slot \\\"%s\\\" whose\nrestart_lsn %X/%X exceeds max_slot_wal_keep_size\"?\n\nI think we can keep the \"invalidating slot \\\"%s\\\" because its\nrestart_lsn %X/%X exceeds max_slot_wal_keep_size\" message as-is. 
We\nmay not see \"terminating process ...\" and \"invalidation slot ...\"\nmessages together for the same slot, so having slightly different\nwording is fine IMO.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 14 Dec 2021 19:43:18 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: more descriptive message for process termination due to\n max_slot_wal_keep_size" }, { "msg_contents": "At Tue, 14 Dec 2021 19:31:21 +0530, Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote in \n> On Tue, Dec 14, 2021 at 9:35 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > > [17605] LOG: terminating process 17614 to release replication slot \"s1\"\n> > + [17605] DETAIL: The slot's restart_lsn 0/2C0000A0 exceeds max_slot_wal_keep_size.\n> > > [17614] FATAL: terminating connection due to administrator command\n> > > [17605] LOG: invalidating slot \"s1\" because its restart_lsn 0/2C0000A0 exceeds max_slot_wal_keep_size\n> >\n> > Somewhat the second and fourth lines look inconsistent each other but\n> > that wouldn't be such a problem. I don't think we want to concatenate\n> > the two lines together as the result is a bit too long.\n> >\n> > > LOG: terminating process 17614 to release replication slot \"s1\" because it's restart_lsn 0/2C0000A0 exceeds max_slot_wal_keep_size.\n> >\n> > What do you think about this?\n> \n> Agree. I think we should also specify the restart_lsn value which\n> would be within max_slot_wal_keep_size for better understanding.\n\nThanks! It seems to me the main message of the \"invalidating\" log has\nno room for further detail. So I split the reason out to DETAILS line\nthe same way with the \"terminating\" message in the attached second\npatch. 
(It is separated from the first patch just for review) I\nbelieve someone can make the DETAIL message simpler or more natural.\n\nThe attached patch set emits the following message.\n\n> LOG: invalidating slot \"s1\"\n> DETAIL: The slot's restart_lsn 0/10000D68 is behind the limit 0/11000000 defined by max_slot_wal_keep_size.\n\nThe second line could be changed like the following or anything other.\n\n> DETAIL: The slot's restart_lsn 0/10000D68 got behind the limit 0/11000000 determined by max_slot_wal_keep_size.\n.....\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 15 Dec 2021 13:12:18 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: more descriptive message for process termination due to\n max_slot_wal_keep_size" }, { "msg_contents": "On Wed, Dec 15, 2021 at 9:42 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 14 Dec 2021 19:31:21 +0530, Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote in\n> > On Tue, Dec 14, 2021 at 9:35 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > > [17605] LOG: terminating process 17614 to release replication slot \"s1\"\n> > > + [17605] DETAIL: The slot's restart_lsn 0/2C0000A0 exceeds max_slot_wal_keep_size.\n> > > > [17614] FATAL: terminating connection due to administrator command\n> > > > [17605] LOG: invalidating slot \"s1\" because its restart_lsn 0/2C0000A0 exceeds max_slot_wal_keep_size\n> > >\n> > > Somewhat the second and fourth lines look inconsistent each other but\n> > > that wouldn't be such a problem. I don't think we want to concatenate\n> > > the two lines together as the result is a bit too long.\n> > >\n> > > > LOG: terminating process 17614 to release replication slot \"s1\" because it's restart_lsn 0/2C0000A0 exceeds max_slot_wal_keep_size.\n> > >\n> > > What do you think about this?\n> >\n> > Agree. 
I think we should also specify the restart_lsn value which\n> > would be within max_slot_wal_keep_size for better understanding.\n>\n> Thanks! It seems to me the main message of the \"invalidating\" log has\n> no room for further detail. So I split the reason out to DETAILS line\n> the same way with the \"terminating\" message in the attached second\n> patch. (It is separated from the first patch just for review) I\n> believe someone can make the DETAIL message simpler or more natural.\n>\n> The attached patch set emits the following message.\n>\n> > LOG: invalidating slot \"s1\"\n> > DETAIL: The slot's restart_lsn 0/10000D68 is behind the limit 0/11000000 defined by max_slot_wal_keep_size.\n>\n> The second line could be changed like the following or anything other.\n>\n> > DETAIL: The slot's restart_lsn 0/10000D68 got behind the limit 0/11000000 determined by max_slot_wal_keep_size.\n> .....\n>\n\nThe second version looks better as it gives more details. I am fine\nwith either of the above wordings.\n\nI would prefer everything in the same message though since\n\"invalidating slot ...\" is too short a LOG message. 
Not everybody\nenabled details always.\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 23 Dec 2021 18:08:08 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: more descriptive message for process termination due to\n max_slot_wal_keep_size" }, { "msg_contents": "At Thu, 23 Dec 2021 18:08:08 +0530, Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote in \n> On Wed, Dec 15, 2021 at 9:42 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > > LOG: invalidating slot \"s1\"\n> > > DETAIL: The slot's restart_lsn 0/10000D68 is behind the limit 0/11000000 defined by max_slot_wal_keep_size.\n> >\n> > The second line could be changed like the following or anything other.\n> >\n> > > DETAIL: The slot's restart_lsn 0/10000D68 got behind the limit 0/11000000 determined by max_slot_wal_keep_size.\n> > .....\n> >\n> \n> The second version looks better as it gives more details. I am fine\n> with either of the above wordings.\n> \n> I would prefer everything in the same message though since\n> \"invalidating slot ...\" is too short a LOG message. Not everybody\n> enabled details always.\n\nMmm. Right. I have gone too much to the same way with the\nprocess-termination message.\n\nI rearranged the messages as follows in the attached version. 
(at master)\n\n> LOG: terminating process %d to release replication slot \\"%s\\" because its restart_lsn %X/%X exceeds max_slot_wal_keep_size\n> DETAIL: The slot got behind the limit %X/%X determined by max_slot_wal_keep_size.\n\n> LOG: invalidating slot \\"%s\\" because its restart_LSN %X/%X exceeds max_slot_wal_keep_size\n> DETAIL: The slot got behind the limit %X/%X determined by max_slot_wal_keep_size.\n\nThe message is actually incomplete even in 13 so I think the change\nto the errmsg() message of the first message is worth back-patching.\n\n- v3-0001-Make-a-message-on-process-termination-more-dscrip.patch\n\n  Changes only the first main message and it can be back-patched to 14. \n\n- v3-0001-Make-a-message-on-process-termination-more-dscrip_13.patch\n\n  The same to the above but for 13, which doesn't have LSN_FORMAT_ARGS.\n\n- v3-0002-Add-detailed-information-to-slot-invalidation-mes.patch\n\n  Attaches the DETAIL line shown above to both messages, only for the\n  master.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 24 Dec 2021 13:42:14 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: more descriptive message for process termination due to\n max_slot_wal_keep_size" }, { "msg_contents": "On Fri, Dec 24, 2021 at 1:42 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 23 Dec 2021 18:08:08 +0530, Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote in\n> > On Wed, Dec 15, 2021 at 9:42 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > > LOG: invalidating slot \"s1\"\n> > > > DETAIL: The slot's restart_lsn 0/10000D68 is behind the limit 0/11000000 defined by max_slot_wal_keep_size.\n> > >\n> > > The second line could be changed like the following or anything other.\n> > >\n> > > > DETAIL: The slot's restart_lsn 0/10000D68 got behind the limit 0/11000000 determined by max_slot_wal_keep_size.\n> > > .....\n> > >\n> >\n> 
> The second version looks better as it gives more details. I am fine\n> > with either of the above wordings.\n> >\n> > I would prefer everything in the same message though since\n> > \"invalidating slot ...\" is too short a LOG message. Not everybody\n> > enabled details always.\n>\n> Mmm. Right. I have gone too much to the same way with the\n> process-termination message.\n>\n> I rearranged the meesages as follows in the attached version. (at master)\n\nThank you for the patch! +1 for improving the messages.\n\n>\n> > LOG: terminating process %d to release replication slot \\\"%s\\\" because its restart_lsn %X/%X exceeds max_slot_wal_keep_size\n> > DETAIL: The slot got behind the limit %X/%X determined by max_slot_wal_keep_size.\n>\n> > LOG: invalidating slot \\\"%s\\\" because its restart_LSN %X/%X exceeds max_slot_wal_keep_size\n> c> DETAIL: The slot got behind the limit %X/%X determined by max_slot_wal_keep_size.\n\n-\nLSN_FORMAT_ARGS(restart_lsn))));\n+\nLSN_FORMAT_ARGS(restart_lsn)),\n+ errdetail(\"The slot\ngot behind the limit %X/%X determined by max_slot_wal_keep_size.\",\n+\nLSN_FORMAT_ARGS(oldestLSN))));\n\nIsn't oldestLSN calculated not only by max_slot_wal_keep_size but also\nby wal_keep_size?\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 24 Dec 2021 17:06:57 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: more descriptive message for process termination due to\n max_slot_wal_keep_size" }, { "msg_contents": "Thank you for the comment.\n\nAt Fri, 24 Dec 2021 17:06:57 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \n> Thank you for the patch! 
+1 for improving the messages.\n> \n> >\n> > > LOG: terminating process %d to release replication slot \\\"%s\\\" because its restart_lsn %X/%X exceeds max_slot_wal_keep_size\n> > > DETAIL: The slot got behind the limit %X/%X determined by max_slot_wal_keep_size.\n> >\n> > > LOG: invalidating slot \\\"%s\\\" because its restart_LSN %X/%X exceeds max_slot_wal_keep_size\n> > c> DETAIL: The slot got behind the limit %X/%X determined by max_slot_wal_keep_size.\n> \n> -\n> LSN_FORMAT_ARGS(restart_lsn))));\n> +\n> LSN_FORMAT_ARGS(restart_lsn)),\n> + errdetail(\"The slot\n> got behind the limit %X/%X determined by max_slot_wal_keep_size.\",\n> +\n> LSN_FORMAT_ARGS(oldestLSN))));\n> \n> Isn't oldestLSN calculated not only by max_slot_wal_keep_size but also\n> by wal_keep_size?\n\nRight. But I believe the two are not assumed to be used at once. One\ncan set wal_keep_size larger than max_slot_wal_keep_size but it is\nactually a kind of ill setting.\n\nLOG: terminating process %d to release replication slot \\\"%s\\\" because its restart_lsn %X/%X exceeds max_slot_wal_keep_size\nDETAIL: The slot got behind the limit %X/%X determined by max_slot_wal_keep_size and wal_keep_size.\n\nMmm. I don't like this. I feel we don't need such detail in the\nmessage.. I'd like to hear opinions from others, please.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 24 Dec 2021 17:30:16 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: more descriptive message for process termination due to\n max_slot_wal_keep_size" }, { "msg_contents": "On Fri, Dec 24, 2021 at 5:30 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Thank you for the comment.\n>\n> At Fri, 24 Dec 2021 17:06:57 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in\n> > Thank you for the patch! 
+1 for improving the messages.\n> >\n> > >\n> > > > LOG: terminating process %d to release replication slot \\\"%s\\\" because its restart_lsn %X/%X exceeds max_slot_wal_keep_size\n> > > > DETAIL: The slot got behind the limit %X/%X determined by max_slot_wal_keep_size.\n> > >\n> > > > LOG: invalidating slot \\\"%s\\\" because its restart_LSN %X/%X exceeds max_slot_wal_keep_size\n> > > c> DETAIL: The slot got behind the limit %X/%X determined by max_slot_wal_keep_size.\n> >\n> > -\n> > LSN_FORMAT_ARGS(restart_lsn))));\n> > +\n> > LSN_FORMAT_ARGS(restart_lsn)),\n> > + errdetail(\"The slot\n> > got behind the limit %X/%X determined by max_slot_wal_keep_size.\",\n> > +\n> > LSN_FORMAT_ARGS(oldestLSN))));\n> >\n> > Isn't oldestLSN calculated not only by max_slot_wal_keep_size but also\n> > by wal_keep_size?\n>\n> Right. But I believe the two are not assumed to be used at once. One\n> can set wal_keep_size larger than max_slot_wal_keep_size but it is\n> actually a kind of ill setting.\n>\n> LOG: terminating process %d to release replication slot \\\"%s\\\" because its restart_lsn %X/%X exceeds max_slot_wal_keep_size\n> DETAIL: The slot got behind the limit %X/%X determined by max_slot_wal_keep_size and wal_keep_size.\n>\n> Mmm. I don't like this. 
I feel we don't need such detail in the\n> message.\n\nHow about something like:\n\nLOG: terminating process %d to release replication slot \\\"%s\\\"\nbecause its restart_lsn %X/%X exceeds the limit\nDETAIL: The slot got behind the limit %X/%X\nHINT: You might need to increase max_slot_wal_keep_size or wal_keep_size.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 24 Dec 2021 20:23:29 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: more descriptive message for process termination due to\n max_slot_wal_keep_size" }, { "msg_contents": "At Fri, 24 Dec 2021 20:23:29 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \n> On Fri, Dec 24, 2021 at 5:30 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > Right. But I believe the two are not assumed to be used at once. One\n> > can set wal_keep_size larger than max_slot_wal_keep_size but it is\n> > actually a kind of ill setting.\n> >\n> > LOG: terminating process %d to release replication slot \\\"%s\\\" because its restart_lsn %X/%X exceeds max_slot_wal_keep_size\n> > DETAIL: The slot got behind the limit %X/%X determined by max_slot_wal_keep_size and wal_keep_size.\n> >\n> > Mmm. I don't like this. I feel we don't need such detail in the\n> > message.\n> \n> How about something like:\n> \n> LOG: terminating process %d to release replication slot \\\"%s\\\"\n> because its restart_lsn %X/%X exceeds the limit\n> DETAIL: The slot got behind the limit %X/%X\n> HINT: You might need to increase max_slot_wal_keep_size or wal_keep_size.\n\nThe message won't be seen when max_slot_wal_keep_size is not set. So\nwe don't recommend to increase wal_keep_size in that case. We might\nneed inhibit (or warn)the two parameters from being activated at once,\nbut it would be another issue.\n\nAnother point is how people determine the value for the parameter. 
I\nsuppose (or believe) max_slot_wal_keep_size is not a kind to set to\nminimal first then increase later but a kind to set to maximum\nallowable first. On the other hand we suggest as the follows for\ntoo-small max_wal_size so we could do the same for this parameter.\n\n> HINT: Consider increasing the configuration parameter \\\"max_wal_size\\\".\n\nAlso, I don't like we have three lines for this message. If the DETAIL\nadds only the specific value of the limit, I think it'd better append\nit to the main message.\n\nSo what do you say if I propose the following?\n\nLOG: terminating process %d to release replication slot \\\"%s\\\"\nbecause its restart_lsn %X/%X exceeds the limit %X/%X\nHINT: You might need to increase max_slot_wal_keep_size.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 04 Jan 2022 10:29:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: more descriptive message for process termination due to\n max_slot_wal_keep_size" }, { "msg_contents": "At Tue, 04 Jan 2022 10:29:31 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> So what do you say if I propose the following?\n> \n> LOG: terminating process %d to release replication slot \\\"%s\\\"\n> because its restart_lsn %X/%X exceeds the limit %X/%X\n> HINT: You might need to increase max_slot_wal_keep_size.\n\nThis version emits the following message.\n\n[35785:checkpointer] LOG: terminating process 36368 to release replication slot \"s1\" because its restart_lsn 0/1F000148 exceeds the limit 0/21000000\n[35785:checkpointer] HINT: You might need to increase max_slot_wal_keep_size.\n[36368:walsender] FATAL: terminating connection due to administrator command\n[36368:walsender] STATEMENT: START_REPLICATION SLOT \"s1\" 0/1F000000 TIMELINE 1\n[35785:checkpointer] LOG: invalidating slot \"s1\" because its restart_lsn 0/1F000148 exceeds the limit 
0/21000000\n[35785:checkpointer] HINT: You might need to increase max_slot_wal_keep_size.\n\nWe can omit the HINT line from the termination log for non-persistent\nslots but I think we don't want to bother that considering its low\nfrequency.\n\nThe CI was confused by the mixed patches for multiple PG versions. In\nthis version the patchset for master are attached as .patch and that\nfor PG13 as .txt.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n From 6c5a680842521b26c0c899c3d0675bd53e58ac11 Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Fri, 24 Dec 2021 13:23:54 +0900\nSubject: [PATCH v4] Make a message on process termination more dscriptive\n\nThe message at process termination due to slot limit doesn't provide\nthe reason. In the major scenario the message is followed by another\nmessage about slot invalidatation, which shows the detail for the\ntermination. However the second message is missing if the slot is\ntemporary one.\n\nAugment the first message with the reason same as the second message.\n\nBackpatch through to 13 where the message was introduced.\n\nReported-by: Alex Enachioaie <alex@altmetric.com>\nAuthor: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nReviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nReviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\nDiscussion: https://www.postgresql.org/message-id/17327-89d0efa8b9ae6271%40postgresql.org\nBackpatch-through: 13\n---\n src/backend/replication/slot.c | 6 ++++--\n 1 file changed, 4 insertions(+), 2 deletions(-)\n\ndiff --git a/src/backend/replication/slot.c b/src/backend/replication/slot.c\nindex 02047ea920..15b8934ae2 100644\n--- a/src/backend/replication/slot.c\n+++ b/src/backend/replication/slot.c\n@@ -1228,8 +1228,10 @@ InvalidatePossiblyObsoleteSlot(ReplicationSlot *s, XLogRecPtr oldestLSN,\n \t\t\tif (last_signaled_pid != active_pid)\n \t\t\t{\n 
\t\t\t\tereport(LOG,\n-\t\t\t\t\t\t(errmsg(\"terminating process %d to release replication slot \\\"%s\\\"\",\n-\t\t\t\t\t\t\t\tactive_pid, NameStr(slotname))));\n+\t\t\t\t\t\t(errmsg(\"terminating process %d to release replication slot \\\"%s\\\" because its restart_lsn %X/%X exceeds max_slot_wal_keep_size\",\n+\t\t\t\t\t\t\t\tactive_pid, NameStr(slotname),\n+\t\t\t\t\t\t\t\t(uint32) (restart_lsn >> 32),\n+\t\t\t\t\t\t\t\t(uint32) restart_lsn)));\n \n \t\t\t\t(void) kill(active_pid, SIGTERM);\n \t\t\t\tlast_signaled_pid = active_pid;\n-- \n2.27.0", "msg_date": "Wed, 02 Mar 2022 15:37:19 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: more descriptive message for process termination due to\n max_slot_wal_keep_size" }, { "msg_contents": "At Wed, 02 Mar 2022 15:37:19 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> The CI was confused by the mixed patches for multiple PG versions. In\n> this version the patchset for master are attached as .patch and that\n> for PG13 as .txt.\n\nYeah.... It is of course the relevant check should be fixed. 
The\nattached v5 adjusts 019_replslot_limit.pl.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 02 Mar 2022 17:55:20 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: more descriptive message for process termination due to\n max_slot_wal_keep_size" }, { "msg_contents": "Hi,\n\nOn 3/2/22 7:37 AM, Kyotaro Horiguchi wrote:\n> At Tue, 04 Jan 2022 10:29:31 +0900 (JST), Kyotaro Horiguchi<horikyota.ntt@gmail.com> wrote in\n>> So what do you say if I propose the following?\n>>\n>> LOG: terminating process %d to release replication slot \\\"%s\\\"\n>> because its restart_lsn %X/%X exceeds the limit %X/%X\n>> HINT: You might need to increase max_slot_wal_keep_size.\n> This version emits the following message.\n>\n> [35785:checkpointer] LOG: terminating process 36368 to release replication slot \"s1\" because its restart_lsn 0/1F000148 exceeds the limit 0/21000000\n> [35785:checkpointer] HINT: You might need to increase max_slot_wal_keep_size.\n\nAs the hint is to increase max_slot_wal_keep_size, what about reporting \nthe difference in size (rather than the limit lsn)? 
Something along \nthose lines?\n\n[35785:checkpointer] LOG: terminating process 36368 to release replication slot \"s1\" because its restart_lsn 0/1F000148 exceeds the limit by <NNN MB>.\n\nRegards,\n\n-- \n\nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services:https://aws.amazon.com", "msg_date": "Mon, 5 Sep 2022 11:56:33 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": false, "msg_subject": "Re: more descriptive message for process termination due to\n max_slot_wal_keep_size" }, { "msg_contents": "At Mon, 5 Sep 2022 11:56:33 +0200, \"Drouvot, Bertrand\" <bdrouvot@amazon.com> wrote in \n> Hi,\n> \n> On 3/2/22 7:37 AM, Kyotaro Horiguchi wrote:\n> > At Tue, 04 Jan 2022 10:29:31 +0900 (JST), Kyotaro\n> > Horiguchi<horikyota.ntt@gmail.com> wrote in\n> >> So what do you say if I 
propose the following?\n> >>\n> >> LOG: terminating process %d to release replication slot \\\"%s\\\"\n> >> because its restart_lsn %X/%X exceeds the limit %X/%X\n> >> HINT: You might need to increase max_slot_wal_keep_size.\n> > This version emits the following message.\n> >\n> > [35785:checkpointer] LOG: terminating process 36368 to release\n> > replication slot \"s1\" because its restart_lsn 0/1F000148 exceeds the\n> > limit 0/21000000\n> > [35785:checkpointer] HINT: You might need to increase\n> > max_slot_wal_keep_size.\n> \n> As the hint is to increase max_slot_wal_keep_size, what about\n> reporting the difference in size (rather than the limit lsn)?\n> Something along those lines?\n> \n> [35785:checkpointer] LOG: terminating process 36368 to release\n> replication slot \"s1\" because its restart_lsn 0/1F000148 exceeds the\n> limit by <NNN MB>.\n\nThanks! That might be more sensible exactly for the reason you\nmentioned. One issue doing that is size_pretty is dbsize.c local\nfunction. 
Since the size is less than kB in many cases, we cannot use\nfixed unit for that.\n\n0001 and 0002 are the same with v5.\n\n0003 exposes byte_size_pretty() to other modules.\n0004 does the change by using byte_size_pretty()\n\nAfter 0004 applied, they look like this.\n\n> LOG: terminating process 108413 to release replication slot \"rep3\" because its restart_lsn 0/7000D8 exceeds the limit by 1024 kB\n> HINT: You might need to increase max_slot_wal_keep_size.\n\nThe reason for \"1024 kB\" instead of \"1 MB\" is the precise value is a\nbit less than 1024 * 1024.\n\n\nregards.\n\n- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 06 Sep 2022 14:53:36 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: more descriptive message for process termination due to\n max_slot_wal_keep_size" }, { "msg_contents": "Hi,\n\nOn 9/6/22 7:53 AM, Kyotaro Horiguchi wrote:\n> At Mon, 5 Sep 2022 11:56:33 +0200, \"Drouvot, Bertrand\"<bdrouvot@amazon.com> wrote in\n>> Hi,\n>>\n>> On 3/2/22 7:37 AM, Kyotaro Horiguchi wrote:\n>>> At Tue, 04 Jan 2022 10:29:31 +0900 (JST), Kyotaro\n>>> Horiguchi<horikyota.ntt@gmail.com> wrote in\n>>>> So what do you say if I propose the following?\n>>>>\n>>>> LOG: terminating process %d to release replication slot \\\"%s\\\"\n>>>> because its restart_lsn %X/%X exceeds the limit %X/%X\n>>>> HINT: You might need to increase max_slot_wal_keep_size.\n>>> This version emits the following message.\n>>>\n>>> [35785:checkpointer] LOG: terminating process 36368 to release\n>>> replication slot \"s1\" because its restart_lsn 0/1F000148 exceeds the\n>>> limit 0/21000000\n>>> [35785:checkpointer] HINT: You might need to increase\n>>> max_slot_wal_keep_size.\n>> As the hint is to increase max_slot_wal_keep_size, what about\n>> reporting the difference in size (rather than the limit lsn)?\n>> Something along those lines?\n>>\n>> [35785:checkpointer] LOG: terminating process 36368 to 
release\n>> replication slot \"s1\" because its restart_lsn 0/1F000148 exceeds the\n>> limit by <NNN MB>.\n> Thanks! That might be more sensible exactly for the reason you\n> mentioned. One issue doing that is size_pretty is dbsize.c local\n> function. Since the size is less than kB in many cases, we cannot use\n> fixed unit for that.\n\nThanks for the new patch version!. I did not realized (sorry about that) \nthat we'd need to expose byte_size_pretty(). Now I wonder if we should \nnot simply report the number of bytes (like I can see it is done in many \nplaces). So something like:\n\n@@ -1298,9 +1298,9 @@ InvalidatePossiblyObsoleteSlot(ReplicationSlot *s, \nXLogRecPtr oldestLSN,\n                                 byte_size_pretty(buf, sizeof(buf),\noldestLSN - restart_lsn);\n                                 ereport(LOG,\n- (errmsg(\"terminating process %d to release replication slot \\\"%s\\\" \nbecause its restart_lsn %X/%X exceeds the limit by %s\",\n+ (errmsg(\"terminating process %d to release replication slot \\\"%s\\\" \nbecause its restart_lsn %X/%X exceeds the limit by %lu bytes\",\nactive_pid, NameStr(slotname),\n- LSN_FORMAT_ARGS(restart_lsn), buf),\n+ LSN_FORMAT_ARGS(restart_lsn), oldestLSN - restart_lsn),\n                                                  errhint(\"You might \nneed to increase max_slot_wal_keep_size.\")));\n\nand then forget about exposing/using byte_size_pretty() (that would be \nmore consistent with the same kind of reporting in the existing code).\n\nWhat do you think?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services:https://aws.amazon.com", "msg_date": "Tue, 6 Sep 2022 10:54:35 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": false, "msg_subject": "Re: more descriptive message for process termination due to\n max_slot_wal_keep_size" }, { "msg_contents": "(I noticed I sent a wrong version..)\n\nAt Tue, 6 Sep 2022 10:54:35 +0200, \"Drouvot, Bertrand\" <bdrouvot@amazon.com> wrote in \n> Thanks for the new patch version!. I did not realized (sorry about\n> that) that we'd need to expose byte_size_pretty(). 
Now I wonder if we\n\nI didn't think we need the units larger than MB, but I used\npretty_print to prevent small number from rounding to exactly zero. On\nthe other hand, in typical cases it is longer than 6 digits in bytes,\nwhich is a bit hard to read a glance.\n\n> LOG: terminating process 16034 to release replication slot \"rep1\" because its restart_lsn 0/3158000 exceeds the limit by 15368192 bytes\n\n> should not simply report the number of bytes (like I can see it is\n> done in many places). So something like:\n..\n> + (errmsg(\"terminating process %d to release replication slot \\\"%s\\\"\n> because its restart_lsn %X/%X exceeds the limit by %lu bytes\",\n..\n> and then forget about exposing/using byte_size_pretty() (that would be\n> more consistent with the same kind of reporting in the existing code).\n> \n> What do you think?\n\nAn alterntive would be rounding up to the whole MB, or a sub-MB.\n\n> ereport(LOG,\n> (errmsg(\"terminating process %d to release replication slot \\\"%s\\\" because its restart_lsn %X/%X exceeds the limit by %.1lf MB\",\n> active_pid, NameStr(slotname),\n> LSN_FORMAT_ARGS(restart_lsn),\n> /* round-up at sub-MB */\n> ceil((double) (oldestLSN - restart_lsn) / 1024 / 102.4) / 10),\n\n> LOG: terminating process 49539 to release replication slot \"rep1\" because its restart_lsn 0/3038000 exceeds the limit by 15.8 MB\n\nIf the distance were 1 byte, it is shown as \"0.1 MB\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 07 Sep 2022 11:20:19 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: more descriptive message for process termination due to\n max_slot_wal_keep_size" }, { "msg_contents": "Hi,\n\nOn 9/7/22 4:20 AM, Kyotaro Horiguchi wrote:\n> (I noticed I sent a wrong version..)\n>\n> At Tue, 6 Sep 2022 10:54:35 +0200, \"Drouvot, Bertrand\"<bdrouvot@amazon.com> wrote in\n>> Thanks for the new patch version!. 
I did not realized (sorry about\n>> that) that we'd need to expose byte_size_pretty(). Now I wonder if we\n> I didn't think we need the units larger than MB, but I used\n> pretty_print to prevent small number from rounding to exactly zero.\n\nYeah makes sense.\n\nAlso, rounding to zero wouldn't occur with \"just\" displaying \"oldestLSN \n- restart_lsn\" (as proposed upthread).\n\n> On\n> the other hand, in typical cases it is longer than 6 digits in bytes,\n> which is a bit hard to read a glance.\n\nYeah right, but that's already the case in some part of the code, like \nfor example in arrayfuncs.c:\n\n                 ereport(ERROR,\n                         (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n                          errmsg(\"array size exceeds the maximum allowed \n(%d)\",\n                                 (int) MaxAllocSize)));\n\n>> LOG: terminating process 16034 to release replication slot \"rep1\" because its restart_lsn 0/3158000 exceeds the limit by 15368192 bytes\n>> should not simply report the number of bytes (like I can see it is\n>> done in many places). 
So something like:\n> ..\n>> + (errmsg(\"terminating process %d to release replication slot \\\"%s\\\"\n>> because its restart_lsn %X/%X exceeds the limit by %lu bytes\",\n> ..\n>> and then forget about exposing/using byte_size_pretty() (that would be\n>> more consistent with the same kind of reporting in the existing code).\n>>\n>> What do you think?\n> An alterntive would be rounding up to the whole MB, or a sub-MB.\n>\n>> ereport(LOG,\n>> (errmsg(\"terminating process %d to release replication slot \\\"%s\\\" because its restart_lsn %X/%X exceeds the limit by %.1lf MB\",\n>> active_pid, NameStr(slotname),\n>> LSN_FORMAT_ARGS(restart_lsn),\n>> /* round-up at sub-MB */\n>> ceil((double) (oldestLSN - restart_lsn) / 1024 / 102.4) / 10),\n\ntypo \"/ 102.4\" ?\n\n>> LOG: terminating process 49539 to release replication slot \"rep1\" because its restart_lsn 0/3038000 exceeds the limit by 15.8 MB\n> If the distance were 1 byte, it is shown as \"0.1 MB\".\n\nRight and I'm -1 on it, I think we should stick to the \"pretty\" or the \n\"bytes only\" approach (my preference being the bytes only one).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services:https://aws.amazon.com", "msg_date": "Wed, 7 Sep 2022 12:16:29 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": false, "msg_subject": "Re: more descriptive message for process termination due to\n max_slot_wal_keep_size" }, { "msg_contents": "At Wed, 7 Sep 2022 12:16:29 +0200, \"Drouvot, Bertrand\" <bdrouvot@amazon.com> wrote in \n> Also, rounding to zero wouldn't occur with \"just\" displaying\n> \"oldestLSN - restart_lsn\" (as proposed upthread).\n..\n> Yeah right, but that's already the case in some part of the code, like\n> for example in arrayfuncs.c:\n\nFair points.\n\n> >> ereport(LOG,\n> >> (errmsg(\"terminating process %d to release replication slot\n> >> \\\"%s\\\" because its restart_lsn %X/%X exceeds the limit by %.1lf\n> >> MB\",\n> >> active_pid, NameStr(slotname),\n> >> LSN_FORMAT_ARGS(restart_lsn),\n> >> /* round-up at sub-MB */\n> >> ceil((double) (oldestLSN - restart_lsn) / 1024 / 102.4) /\n> >> 10),\n> \n> typo \"/ 102.4\" ?\n\nNo, it rounds the difference up to one decimal place. So it is devided\nby 10 after ceil():p\n\n> >> LOG: terminating process 49539 to release replication slot \"rep1\"\n> >> because its restart_lsn 0/3038000 exceeds the limit by 15.8 MB\n> > If the distance were 1 byte, it is shown as \"0.1 MB\".\n> \n> Right and I'm -1 on it, I think we should stick to the \"pretty\" or the\n> \"bytes only\" approach (my preference being the bytes only one).\n\nOkay. the points you brought up above are sufficient grounds for not\ndoing so. 
Now they are in the following format.\n\n>> LOG: terminating process 16034 to release replication slot \"rep1\"\n>> because its restart_lsn 0/3158000 exceeds the limit by 15368192 bytes\n\nThank you for the discussion, Bertrand!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 08 Sep 2022 13:40:20 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: more descriptive message for process termination due to\n max_slot_wal_keep_size" }, { "msg_contents": "Hi,\n\nOn 9/8/22 6:40 AM, Kyotaro Horiguchi wrote:\n> At Wed, 7 Sep 2022 12:16:29 +0200, \"Drouvot, Bertrand\"<bdrouvot@amazon.com> wrote in\n>>>> LOG: terminating process 49539 to release replication slot \"rep1\"\n>>>> because its restart_lsn 0/3038000 exceeds the limit by 15.8 MB\n>>> If the distance were 1 byte, it is shown as \"0.1 MB\".\n>> Right and I'm -1 on it, I think we should stick to the \"pretty\" or the\n>> \"bytes only\" approach (my preference being the bytes only one).\n> Okay. the points you brought up above are sufficient grounds for not\n> doing so. 
Now they are in the following format.\n>\n>>> LOG: terminating process 16034 to release replication slot \"rep1\"\n>>> because its restart_lsn 0/3158000 exceeds the limit by 15368192 bytes\n> Thank you for the discussion, Bertrand!\n\nYou are welcome, thanks for the patch!\n\nIt looks good to me, barring any objections i think we can mark the CF \nentry as Ready for Committer.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services:https://aws.amazon.com", "msg_date": "Thu, 8 Sep 2022 11:29:38 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": false, "msg_subject": "Re: more descriptive message for process termination due to\n max_slot_wal_keep_size" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> Okay. 
Now they are in the following format.\n> \n> > LOG: terminating process 16034 to release replication slot \"rep1\"\n> > because its restart_lsn 0/3158000 exceeds the limit by 15368192 bytes\n> \n> This seems to me to be a pretty blatant violation of our first message\n> style guideline [1]:\n\nThanks! It seems that I was waiting for a comment on that line. I\nthought that way at first but finally returned to the current message\nas the result of discussion (in my memory). I will happily make the\nmain message shorter.\n\n> I think you should leave the primary message alone and add a DETAIL,\n> as the first version of the patch did.\n> \n> The existing \"invalidating slot\" message is already in violation\n> of this guideline, so splitting off a DETAIL from that seems\n> indicated as well.\n\nSo I'm going to change the mssage as:\n\nLOG: terminating process %d to release replication slot \\\"%s\\\"\nDETAIL: The slot's restart_lsn %X/%X exceeds the limit by %lld bytes.\nHINT: You might need to increase max_slot_wal_keep_size.\n\nLOG: invalidating *replication* slot \\\"%s\\\"\nDETAILS: (ditto)\nHINTS: (ditto)\n\nIt seems that it's no longer useful to split out the first patch so I\nmerged them into one.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 29 Sep 2022 14:27:53 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: more descriptive message for process termination due to\n max_slot_wal_keep_size" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Wed, 28 Sep 2022 16:30:37 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n>> I think you should leave the primary message alone and add a DETAIL,\n>> as the first version of the patch did.\n\n> So I'm going to change the mssage as:\n\n> LOG: terminating process %d to release replication slot \\\"%s\\\"\n> DETAIL: The slot's restart_lsn %X/%X exceeds the limit by %lld bytes.\n> HINT: You 
might need to increase max_slot_wal_keep_size.\n\n> LOG: invalidating *replication* slot \\\"%s\\\"\n> DETAILS: (ditto)\n> HINTS: (ditto)\n\nI thought the latter was a little *too* short; the primary message\nshould at least give you some clue why that happened, even if it\ndoesn't offer all the detail. After some thought I changed it to\n\nLOG: invalidating obsolete replication slot \\\"%s\\\"\n\nand pushed it that way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 29 Sep 2022 13:31:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: more descriptive message for process termination due to\n max_slot_wal_keep_size" }, { "msg_contents": "At Thu, 29 Sep 2022 13:31:00 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > At Wed, 28 Sep 2022 16:30:37 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> > LOG: invalidating *replication* slot \\\"%s\\\"\n> > DETAILS: (ditto)\n> > HINTS: (ditto)\n> \n> I thought the latter was a little *too* short; the primary message\n> should at least give you some clue why that happened, even if it\n> doesn't offer all the detail. After some thought I changed it to\n\nYeah, agreed. It looks better. (I was about to spell it as\n\"invalidating slot \"%s\"\" then changed my mind to add \"replication\". I\nfelt that it is a bit too short but didn't think about further\nstreaching that by adding \"obsolete\"..).\n\n> LOG: invalidating obsolete replication slot \\\"%s\\\"\n> \n> and pushed it that way.\n\nThanks. 
And thanks for fixing the test script, too.\n\nBy the way, I didn't notice at that time (and forgot about the\npolicy), but the HINT message has variations differing only by the\nvariable name.\n\nWhat do you think about the attached?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 30 Sep 2022 11:15:19 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: more descriptive message for process termination due to\n max_slot_wal_keep_size" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> By the way, I didn't notice at that time (and forgot about the\n> policy), but the HINT message has variations differing only by the\n> variable name.\n\n> What do you think about the attached?\n\nHmm, maybe, but a quick grep for 'You might need to increase'\nfinds about a dozen other cases, and none of them are using %s.\nIf we do this we should change all of them, and they probably\nneed \"translator:\" hints. I'm not sure whether abstracting\naway the variable names will make translation harder.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 29 Sep 2022 22:49:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: more descriptive message for process termination due to\n max_slot_wal_keep_size" }, { "msg_contents": "At Thu, 29 Sep 2022 22:49:00 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > By the way, I didn't notice at that time (and forgot about the\n> > policy), but the HINT message has variations differing only by the\n> > variable name.\n> \n> > What do you think about the attached?\n> \n> Hmm, maybe, but a quick grep for 'You might need to increase'\n> finds about a dozen other cases, and none of them are using %s.\n(Mmm. 
I didn't find others only in po files..)\n> If we do this we should change all of them, and they probably\n> need \"translator:\" hints. I'm not sure whether abstracting\n> away the variable names will make translation harder.\n\nI expect that dedicated po-editing tools can lookup corresponding code\nlines, which gives the answer if no hint is attached at least in this\nspecific case.\n\nAnyway, thinking calmly, since we are not about to edit these\nmessages, it's unlikely to happen on purpose, I think. So I don't mean\nto push this so hard.\n\nThanks for the comment!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 30 Sep 2022 13:49:51 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: more descriptive message for process termination due to\n max_slot_wal_keep_size" } ]
[ { "msg_contents": "Hi\r\n\r\nI did the following steps on PG.\r\n\r\n1. Building a synchronous streaming replication environment.\r\n2. Executing the following SQL statements on primary\r\n (1) postgres=# CREATE EXTENSION pageinspect;\r\n (2) postgres=# begin;\r\n (3) postgres=# select txid_current();\r\n (4) postgres=# create table mytest6(i int);\r\n (6) postgres=# insert into mytest6 values(1);\r\n (7) postgres=# commit;\r\n3. Executing the following SQL statements on standby\r\n (8) postgres=# select * from mytest6;\r\n i \r\n ---\r\n 1\r\n (1 row)\r\n (9) postgres=# SELECT t_infomask FROM heap_page_items(get_raw_page('pg_class', 0)) where t_xmin=502※;\r\n   t_infomask \r\n ------------\r\n 2049\r\n (1 row)\r\n ※502 is the transaction ID returned by step (3) above.\r\n\r\nIn the result of step (9),the value of the t_infomask field is 2049(0x801) which means that HEAP_XMAX_INVALID \r\nand HEAP_HASNULL flags were setted, but HEAP_XMIN_COMMITTED flag was not setted.\r\n\r\nAccording to source , when step (8) was executed,SetHintBits function were called to set HEAP_XMIN_COMMITTED.\r\nhowever, the minRecoveryPoint value was not updated. So HEAP_XMIN_COMMITTED flag was not setted successfully.\r\n\r\nAfter CheckPoint, select from mytest6 again in another session, we can see HEAP_XMIN_COMMITTED flag was setted.\r\n\r\nSo my question is that before checkpoint, HEAP_XMIN_COMMITTED flag was not setted correctly, right?\r\n\r\nOr we need to move minRecoveryPoint forword to make HEAP_XMIN_COMMITTED flag setted correctly when first select\r\nfrom mytest6.\r\n\r\n\r\nBest Regards, LiuHuailing\r\n-- \r\n以上\r\nLiu Huailing\r\n--------------------------------------------------\r\nLiu Huailing\r\nDevelopment Department III\r\nSoftware Division II\r\nNanjing Fujitsu Nanda Software Tech. 
Co., Ltd.(FNST)\r\nADDR.: No.6 Wenzhu Road, Software Avenue,\r\n Nanjing, 210012, China \r\nTEL : +86+25-86630566-8439\r\nCOINS: 7998-8439\r\nFAX : +86+25-83317685\r\nMAIL : liuhuailing@cn.fujitsu.com\r\n--------------------------------------------------\r\n\r\n", "msg_date": "Tue, 14 Dec 2021 08:54:57 +0000", "msg_from": "\"liuhuailing@fujitsu.com\" <liuhuailing@fujitsu.com>", "msg_from_op": true, "msg_subject": "Question about HEAP_XMIN_COMMITTED" } ]
[ { "msg_contents": "Hi hackers,\n\nWhen I doing development based by PG, I found the following comment have a\nlittle problem in file src/include/catalog/pg_class.h.\n\n/*\n * an explicitly chosen candidate key's columns are used as replica identity.\n * Note this will still be set if the index has been dropped; in that case it\n * has the same meaning as 'd'.\n */\n#define\t\t REPLICA_IDENTITY_INDEX\t'i'\n\nThe last sentence makes me a little confused :\n[......in that case it as the same meaning as 'd'.]\n\nNow, pg-doc didn't have a clear style to describe this.\n\n\nBut if I drop relation's replica identity index like the comment, the action\nis not as same as default.\n\nFor example:\nExecute the following SQL:\ncreate table tbl (col1 int primary key, col2 int not null);\ncreate unique INDEX ON tbl(col2);\nalter table tbl replica identity using INDEX tbl_col2_idx;\ndrop index tbl_col2_idx;\ncreate publication pub for table tbl;\ndelete from tbl;\n\nActual result:\nERROR: cannot delete from table \"tbl\" because it does not have a replica identity and publishes deletes\nHINT: To enable deleting from the table, set REPLICA IDENTITY using ALTER TABLE.\n\nExpected result in comment:\nDELETE 0\n\n\nI found that in the function CheckCmdReplicaIdentity, the operation described\nin the comment is not considered,\nWhen relation's replica identity index is found to be InvalidOid, an error is\nreported.\n\nAre the comment here not accurate enough?\nOr we need to adjust the code according to the comments?\n\n\nRegards,\nWang wei\n\n\n", "msg_date": "Tue, 14 Dec 2021 12:38:28 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "Confused comment about drop replica identity index" }, { "msg_contents": "On Tue, Dec 14, 2021 at 6:08 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> Hi hackers,\n>\n> When I doing development based by PG, I found the following comment have a\n> little problem in file 
src/include/catalog/pg_class.h.\n>\n> /*\n> * an explicitly chosen candidate key's columns are used as replica identity.\n> * Note this will still be set if the index has been dropped; in that case it\n> * has the same meaning as 'd'.\n> */\n> #define REPLICA_IDENTITY_INDEX 'i'\n>\n> The last sentence makes me a little confused :\n> [......in that case it as the same meaning as 'd'.]\n>\n> Now, pg-doc didn't have a clear style to describe this.\n>\n>\n> But if I drop relation's replica identity index like the comment, the action\n> is not as same as default.\n>\n> For example:\n> Execute the following SQL:\n> create table tbl (col1 int primary key, col2 int not null);\n> create unique INDEX ON tbl(col2);\n> alter table tbl replica identity using INDEX tbl_col2_idx;\n> drop index tbl_col2_idx;\n> create publication pub for table tbl;\n> delete from tbl;\n>\n> Actual result:\n> ERROR: cannot delete from table \"tbl\" because it does not have a replica identity and publishes deletes\n> HINT: To enable deleting from the table, set REPLICA IDENTITY using ALTER TABLE.\n\nI think I see where's the confusion. 
The table has a primary key and\nso when the replica identity index is dropped, per the comment in\ncode, you expect that primary key will be used as replica identity\nsince that's what 'd' or default means.\n\n>\n> Expected result in comment:\n> DELETE 0\n>\n>\n> I found that in the function CheckCmdReplicaIdentity, the operation described\n> in the comment is not considered,\n> When relation's replica identity index is found to be InvalidOid, an error is\n> reported.\n\nThis code in RelationGetIndexList() is not according to that comment.\n\n if (replident == REPLICA_IDENTITY_DEFAULT && OidIsValid(pkeyIndex))\n relation->rd_replidindex = pkeyIndex;\n else if (replident == REPLICA_IDENTITY_INDEX && OidIsValid(candidateIndex))\n relation->rd_replidindex = candidateIndex;\n else\n relation->rd_replidindex = InvalidOid;\n\n>\n> Are the comment here not accurate enough?\n> Or we need to adjust the code according to the comments?\n>\n\nComment in code is one thing, but I think PG documentation is not\ncovering the use case you tried. 
What happens when a replica identity\nindex is dropped has not been covered either in ALTER TABLE\nhttps://www.postgresql.org/docs/13/sql-altertable.html or DROP INDEX\nhttps://www.postgresql.org/docs/14/sql-dropindex.html documentation.\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 14 Dec 2021 19:10:49 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Confused comment about drop replica identity index" }, { "msg_contents": "On Tue, Dec 14, 2021 at 07:10:49PM +0530, Ashutosh Bapat wrote:\n> This code in RelationGetIndexList() is not according to that comment.\n> \n> if (replident == REPLICA_IDENTITY_DEFAULT && OidIsValid(pkeyIndex))\n> relation->rd_replidindex = pkeyIndex;\n> else if (replident == REPLICA_IDENTITY_INDEX && OidIsValid(candidateIndex))\n> relation->rd_replidindex = candidateIndex;\n> else\n> relation->rd_replidindex = InvalidOid;\n\nYeah, the comment is wrong. If the index of a REPLICA_IDENTITY_INDEX\nis dropped, I recall that the behavior is the same as\nREPLICA_IDENTITY_NOTHING.\n\n> Comment in code is one thing, but I think PG documentation is not\n> covering the use case you tried. What happens when a replica identity\n> index is dropped has not been covered either in ALTER TABLE\n> https://www.postgresql.org/docs/13/sql-altertable.html or DROP INDEX\n> https://www.postgresql.org/docs/14/sql-dropindex.html documentation.\n\nNot sure about the DROP INDEX page, but I'd be fine with mentioning\nthat in the ALTER TABLE page in the paragraph related to REPLICA\nIDENTITY. While on it, I would be tempted to switch this stuff to use\na list of <variablelist> for all the option values. That would be\nmuch easier to read.\n\n[ ... thinks a bit ... ]\n\nFWIW, this brings back some memories, as of this thread:\nhttps://www.postgresql.org/message-id/20200522035028.GO2355@paquier.xyz\n\nSee also commit fe7fd4e from August 2020, where some tests have been\nadded. 
I recall seeing this incorrect comment from last year's\nthread and it may have been mentioned in one of the surrounding\nthreads.. Maybe I just let it go back then. I don't know.\n--\nMichael", "msg_date": "Wed, 15 Dec 2021 12:24:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Confused comment about drop replica identity index" }, { "msg_contents": "On Tue, Dec 15, 2021 at 11:25AM, Michael Paquier wrote:\n> Yeah, the comment is wrong. If the index of a REPLICA_IDENTITY_INDEX is\n> dropped, I recall that the behavior is the same as REPLICA_IDENTITY_NOTHING.\n\nThank you for your response.\nI agreed that the comment is wrong.\n\n\n> Not sure about the DROP INDEX page, but I'd be fine with mentioning that in the\n> ALTER TABLE page in the paragraph related to REPLICA IDENTITY. While on it, I\n> would be tempted to switch this stuff to use a list of <variablelist> for all the option\n> values. That would be much easier to read.\n\nYeah, if we can add some details to pg-doc and code comments, I think it will\nbe more friendly to PG users and developers.\n\nRegards,\nWang wei\n\n\n", "msg_date": "Wed, 15 Dec 2021 09:18:26 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Confused comment about drop replica identity index" }, { "msg_contents": "On Wed, Dec 15, 2021 at 09:18:26AM +0000, wangw.fnst@fujitsu.com wrote:\n> Yeah, if we can add some details to pg-doc and code comments, I think it will\n> be more friendly to PG users and developers.\n\nWould you like to write a patch to address all that?\nThanks,\n--\nMichael", "msg_date": "Thu, 16 Dec 2021 07:39:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Confused comment about drop replica identity index" }, { "msg_contents": "On Tue, Dec 16, 2021 at 06:40AM, Michael Paquier wrote:\n> Would you like to write a patch to address all that?\n\nOK, 
I will push it soon.\n\n\nRegards,\nWang wei\n\n\n", "msg_date": "Thu, 16 Dec 2021 02:27:06 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Confused comment about drop replica identity index" }, { "msg_contents": "On 2021-Dec-15, Michael Paquier wrote:\n\n> On Tue, Dec 14, 2021 at 07:10:49PM +0530, Ashutosh Bapat wrote:\n> > This code in RelationGetIndexList() is not according to that comment.\n> > \n> > if (replident == REPLICA_IDENTITY_DEFAULT && OidIsValid(pkeyIndex))\n> > relation->rd_replidindex = pkeyIndex;\n> > else if (replident == REPLICA_IDENTITY_INDEX && OidIsValid(candidateIndex))\n> > relation->rd_replidindex = candidateIndex;\n> > else\n> > relation->rd_replidindex = InvalidOid;\n> \n> Yeah, the comment is wrong. If the index of a REPLICA_IDENTITY_INDEX\n> is dropped, I recall that the behavior is the same as\n> REPLICA_IDENTITY_NOTHING.\n\nHmm, so if a table has REPLICA IDENTITY INDEX and there is a publication\nwith an explicit column list, then we need to forbid the DROP INDEX for\nthat index.\n\nI wonder why don't we just forbid DROP INDEX of an index that's been\ndefined as replica identity. It seems quite silly an operation to\nallow.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Ed is the standard text editor.\"\n http://groups.google.com/group/alt.religion.emacs/msg/8d94ddab6a9b0ad3\n\n\n", "msg_date": "Thu, 16 Dec 2021 15:08:46 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Confused comment about drop replica identity index" }, { "msg_contents": "On Thu, Dec 16, 2021 at 03:08:46PM -0300, Alvaro Herrera wrote:\n> Hmm, so if a table has REPLICA IDENTITY INDEX and there is a publication\n> with an explicit column list, then we need to forbid the DROP INDEX for\n> that index.\n\nHmm. 
I have not followed this thread very closely.\n\n> I wonder why don't we just forbid DROP INDEX of an index that's been\n> defined as replica identity. It seems quite silly an operation to\n> allow.\n\nThe commit logs talk about b23b0f55 here for this code, to ease the\nhandling of relcache entries for rd_replidindex. 07cacba is the\norigin of the logic (see RelationGetIndexList). Andres?\n\nI don't think that this is really an argument against putting more\nrestrictions as anything that deals with an index drop, including the\ninternal ones related to constraints, would need to go through\nindex_drop(), and new features may want more restrictions in place as\nyou say.\n\nNow, I don't see a strong argument in changing this behavior either\n(aka I have not looked at what this implies for the new publication\ntypes), and we still need to do something for the comment/docs in\nexisting branches, anyway. So I would still fix this gap as a first\nstep, then deal with the rest on HEAD as necessary.\n--\nMichael", "msg_date": "Fri, 17 Dec 2021 08:55:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Confused comment about drop replica identity index" }, { "msg_contents": "On Thu, Dec 16, 2021, at 8:55 PM, Michael Paquier wrote:\n> On Thu, Dec 16, 2021 at 03:08:46PM -0300, Alvaro Herrera wrote:\n> > Hmm, so if a table has REPLICA IDENTITY INDEX and there is a publication\n> > with an explicit column list, then we need to forbid the DROP INDEX for\n> > that index.\n> \n> Hmm. I have not followed this thread very closely.\n> \n> > I wonder why don't we just forbid DROP INDEX of an index that's been\n> > defined as replica identity. It seems quite silly an operation to\n> > allow.\nIt would avoid pilot errors.\n\n> The commit logs talk about b23b0f55 here for this code, to ease the\n> handling of relcache entries for rd_replidindex. 07cacba is the\n> origin of the logic (see RelationGetIndexList). 
Andres?\n> \n> I don't think that this is really an argument against putting more\n> restrictions as anything that deals with an index drop, including the\n> internal ones related to constraints, would need to go through\n> index_drop(), and new features may want more restrictions in place as\n> you say.\n> \n> Now, I don't see a strong argument in changing this behavior either\n> (aka I have not looked at what this implies for the new publication\n> types), and we still need to do something for the comment/docs in\n> existing branches, anyway. So I would still fix this gap as a first\n> step, then deal with the rest on HEAD as necessary.\n> \nI've never understand the weak dependency between the REPLICA IDENTITY and the\nindex used by it. I'm afraid we will receive complaints about this unexpected\nbehavior (my logical replication setup is broken because I dropped an index) as\nfar as new logical replication features are added. Row filtering imposes some\nrestrictions in UPDATEs and DELETEs (an error message is returned and the\nreplication stops) if a column used in the expression isn't part of the REPLICA\nIDENTITY anymore.\n\nIt seems we already have some code in
RangeVarCallbackForDropRelation() that\ndeals with a system index error condition. We could save a syscall and provide\na test for indisreplident there.\n\nIf this restriction is undesirable, we should at least document this choice and\nprobably emit a WARNING for DROP INDEX.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Thu, 16 Dec 2021 21:31:15 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: Confused comment about drop replica identity index" }, { "msg_contents": "On Tue, Dec 16, 2021 at 10:27AM, Michael Paquier wrote:\n> On Tue, Dec 16, 2021 at 06:40AM, Michael Paquier wrote:\n> > Would you like to write a patch to address all that?\n> \n> OK, I will push it soon.\n\n\nHere is a patch to correct wrong comment about
REPLICA_IDENTITY_INDEX, And improve the pg-doc.\n\n\nRegards,\nWang wei", "msg_date": "Mon, 20 Dec 2021 03:46:13 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Confused comment about drop replica identity index" }, { "msg_contents": "On Mon, Dec 20, 2021 at 03:46:13AM +0000, wangw.fnst@fujitsu.com wrote:\n> Here is a patch to correct wrong comment about\n> REPLICA_IDENTITY_INDEX, And improve the pg-doc.\n\nThat's mostly fine. I have made some adjustments as per the\nattached.\n\n+ The default for non-system tables. Records the old values of the columns\n+ of the primary key, if any. The default for non-system tables. \nThe same sentence is repeated twice.\n\n+ Records no information about the old row.(This is the\ndefault for system tables.)\nFor consistency with the rest, this could drop the parenthesis for the\nsecond sentence.\n\n+ <term><literal>USING INDEX index_name</literal></term>\nThis should use <replaceable> as markup for index_name.\n\nPondering more about this thread, I don't think we should change the\nexisting behavior in the back-branches, but I don't have any arguments\nabout doing such changes on HEAD to help the features being worked\non, either. 
So I'd like to apply and back-patch the attached, as a\nfirst step, to fix the inconsistency.\n--\nMichael", "msg_date": "Mon, 20 Dec 2021 20:11:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Confused comment about drop replica identity index" }, { "msg_contents": "On Mon, Dec 20, 2021, at 8:11 AM, Michael Paquier wrote:\n> On Mon, Dec 20, 2021 at 03:46:13AM +0000, wangw.fnst@fujitsu.com wrote:\n> > Here is a patch to correct wrong comment about\n> > REPLICA_IDENTITY_INDEX, And improve the pg-doc.\n> \n> That's mostly fine. I have made some adjustments as per the\n> attached.\nYour patch looks good to me.\n\n> Pondering more about this thread, I don't think we should change the\n> existing behavior in the back-branches, but I don't have any arguments\n> about doing such changes on HEAD to help the features being worked\n> on, either. So I'd like to apply and back-patch the attached, as a\n> first step, to fix the inconsistency.\n> \nWhat do you think about the attached patch? It forbids the DROP INDEX. We might\nadd a detail message but I didn't in this patch.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Mon, 20 Dec 2021 11:57:32 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: Confused comment about drop replica identity index" }, { "msg_contents": "On Mon, Dec 20, 2021 at 11:57:32AM -0300, Euler Taveira wrote:\n> What do you think about the attached patch? It forbids the DROP INDEX. We might\n> add a detail message but I didn't in this patch.\n\nYeah. 
I'd agree about doing something like that on HEAD, and that\nwould help with some of the logrep-related patches currently being\nworked on, as far as I understood.\n--\nMichael", "msg_date": "Tue, 21 Dec 2021 09:46:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Confused comment about drop replica identity index" }, { "msg_contents": "On Tue, Dec 20, 2021 at 19:11PM, Michael Paquier wrote:\n> That's mostly fine. I have made some adjustments as per the attached.\n\nThanks for reviewing.\n\n\n> + The default for non-system tables. Records the old values of the columns\n> + of the primary key, if any. The default for non-system tables.\n> The same sentence is repeated twice.\n> \n> + Records no information about the old row.(This is the\n> default for system tables.)\n> For consistency with the rest, this could drop the parenthesis for the second\n> sentence.\n> \n> + <term><literal>USING INDEX index_name</literal></term>\n> This should use <replaceable> as markup for index_name.\n\nThe change looks good to me.\n\n\n\nRegards,\nWang wei", "msg_date": "Tue, 21 Dec 2021 01:31:42 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Confused comment about drop replica identity index" }, { "msg_contents": "On Mon, Dec 20, 2021 at 11:57:32AM -0300, Euler Taveira wrote:\n> On Mon, Dec 20, 2021, at 8:11 AM, Michael Paquier wrote:\n>> That's mostly fine. I have made some adjustments as per the\n>> attached.\n>\n> Your patch looks good to me.\n\nThanks. 
I have done this part for now.\n--\nMichael", "msg_date": "Wed, 22 Dec 2021 16:40:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Confused comment about drop replica identity index" }, { "msg_contents": "On Tues, Dec 21, 2021 8:47 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Dec 20, 2021 at 11:57:32AM -0300, Euler Taveira wrote:\n> > What do you think about the attached patch? It forbids the DROP INDEX.\n> > We might add a detail message but I didn't in this patch.\n> \n> Yeah. I'd agree about doing something like that on HEAD, and that would help\n> with some of the logirep-related patch currently being worked on, as far as I\n> understood.\n\nHi,\n\nI think forbidding DROP INDEX might not completely solve this problem, because\na user could still use another command to delete the index, for example: ALTER\nTABLE DROP COLUMN. 
After dropping the column, the index on it will also be\n> dropped.\n> \n> Besides, user can also ALTER REPLICA IDENTITY USING INDEX \"primary key\", and in\n> this case, when they ALTER TABLE DROP CONSTR \"PRIMARY KEY\", the replica\n> identity index will also be dropped.\n\nIndexes related to any other object type, like constraints, are\ndropped as part of index_drop() as per the handling of dependencies.\nSo, by putting a restriction there, any commands would take this code\npath, and fail when trying to drop an index used as a replica\nidentity. Why would that be logically a problem? We may want errors\nwith more context for such cases, though, as complaining about an\nobject not directly known by the user when triggering a different\ncommand, like a constraint index, could be confusing.\n--\nMichael", "msg_date": "Mon, 3 Jan 2022 16:47:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Confused comment about drop replica identity index" } ]
[ { "msg_contents": "Hi,\n\nPlease refer to this scenario where pg_upgrade operation is failing if \nthe table is created in single-user mode.\n\nPG v13\n--connect to PG v13 using single user mode  ( ./postgres --single -D \n/tmp/data13 postgres )\n--create table ( backend> create table r(n int); )\n--exit  ( ctrl + D)\n\n-- Perform pg_upgrade ( PG v13->PG v15 ) (./pg_upgrade -d data13 -D \ndata15 -b /usr/psql-12/bin -B . )\n\nit will fail with these messages\n\nRestoring global objects in the new cluster                 ok\nRestoring database schemas in the new cluster\n   postgres\n*failure*\n\nConsult the last few lines of \"pg_upgrade_dump_14174.log\" for\nthe probable cause of the failure.\nFailure, exiting\n\n--cat pg_upgrade_dump_14174.log\n--\n--\n--\n--\npg_restore: creating TABLE \"public.r\"\npg_restore: while PROCESSING TOC:\npg_restore: from TOC entry 200; 1259 14180 TABLE r edb\npg_restore: error: could not execute query: ERROR:  pg_type array OID \nvalue not set when in binary upgrade mode\nCommand was:\n-- For binary upgrade, must preserve pg_type oid\nSELECT \npg_catalog.binary_upgrade_set_next_pg_type_oid('14181'::pg_catalog.oid);\n\n\n-- For binary upgrade, must preserve pg_class oids and relfilenodes\nSELECT \npg_catalog.binary_upgrade_set_next_heap_pg_class_oid('14180'::pg_catalog.oid);\nSELECT \npg_catalog.binary_upgrade_set_next_heap_relfilenode('14180'::pg_catalog.oid);\n\nCREATE TABLE \"public\".\"r\" (\n     \"n\" integer\n);\n\n-- For binary upgrade, set heap's relfrozenxid and relminmxid\nUPDATE pg_catalog.pg_class\nSET relfrozenxid = '492', relminmxid = '1'\nWHERE oid = '\"public\".\"r\"'::pg_catalog.regclass;\n\nIs it expected?\n\n-- \nregards,tushar\nEnterpriseDB  https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 14 Dec 2021 23:27:00 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": true, "msg_subject": "pg_upgrade operation failed if table created in --single user 
mode" }, { "msg_contents": "tushar <tushar.ahuja@enterprisedb.com> writes:\n> Please refer to this scenario where pg_upgrade operation is failing if \n> the table is create in single-user mode.\n\nFixed in v14:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=f7f70d5e2\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Dec 2021 13:41:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade operation failed if table created in --single user\n mode" } ]
[ { "msg_contents": "Hi,\n\nThere are some things about the life cycle of the common TupleDesc\nthat I'm not 100% sure about.\n\n1. In general, if you get one from relcache or typcache, it is\n   reference-counted, right?\n\n2. The only exception I know of is if you ask the typcache for\n   a blessed one (RECORD+typmod), and it is found in the shared-memory\n   cache. It is not refcounted then. If it is found only locally,\n   it is refcounted.\n\n3. Is that shared case the only way you could see a non-refcounted\n   TupleDesc handed to you by the typcache?\n\n4. How long can such a non-refcounted TupleDesc from the typcache\n   be counted on to live?\n\n   There is a comment in expandedrecord.c: \"If it's not refcounted, just\n   assume it will outlive the expanded object.\"\n\n   That sounds like confidence, but is it confidence in the longevity\n   of shared TupleDescs, or in the fleeting lives of expanded records?\n\n   The same comment says \"(This can happen for shared record types,\n   for instance.)\" Does the \"for instance\" mean there are now, or just\n   in future might be, other cases where the typcache hands you\n   a non-refcounted TupleDesc?\n\n5. When a constructed TupleDesc is blessed, the copy placed in the cache\n   by assign_record_type_typmod is born with a refcount of 1. Assuming every\n   later user of that TupleDesc plays nicely with balanced pin and release,\n   what event(s), if any, could ever occur to decrease that initial 1 to 0?\n\nThanks for any help getting these points straight!\n\nRegards,\n-Chap", "msg_date": "Tue, 14 Dec 2021 15:42:12 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Life cycles of tuple descriptors" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> There are some things about the life cycle of the common TupleDesc\n> that I'm not 100% sure about.\n\n> 1. 
In general, if you get one from relcache or typcache, it is\n> reference-counted, right?\n\nTupdescs for named composite types should be, since those are\npotentially modifiable by DDL. The refcount allows a stale tupdesc\nto go away when it's no longer referenced anywhere.\n\nTupdescs for RECORD types are a different story: there's no way to\nchange them once created, so they'll live as long as the process\ndoes (at least for tupdescs kept in the typcache). Refcounting\nisn't terribly necessary in that case; and at least for the shared\ntupdesc case, we don't do it, to avoid questions of modifying a\npiece of shared state.\n\n> 3. Is that shared case the only way you could see a non-refcounted\n> TupleDesc handed to you by the typcache?\n\nI'm not sure what happens for a non-shared RECORD tupdesc, but it\nprobably wouldn't be wise to assume anything either way.\n\n> 5. When a constructed TupleDesc is blessed, the copy placed in the cache\n> by assign_record_type_typmod is born with a refcount of 1. Assuming every\n> later user of that TupleDesc plays nicely with balanced pin and release,\n> what event(s), if any, could ever occur to decrease that initial 1 to 0?\n\nThat refcount is describing the cache's own reference, so as long as\nthat reference remains it'd be incorrect to decrease the refcount to 0.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Dec 2021 18:03:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Life cycles of tuple descriptors" }, { "msg_contents": "On 12/14/21 18:03, Tom Lane wrote:\n\n> Tupdescs for RECORD types are a different story: ... Refcounting\n> isn't terribly necessary in that case; and at least for the shared\n> tupdesc case, we don't do it, to avoid questions of modifying a\n> piece of shared state.\n\nOk, that's kind of what I was getting at here:\n\n>> 5. 
When a constructed TupleDesc is blessed, the copy placed in the cache\n>> by assign_record_type_typmod is born with a refcount of 1. ...\n>> what event(s), if any, could ever occur to decrease that initial 1 ...\n>\n> That refcount is describing the cache's own reference, so as long as\n> that reference remains it'd be incorrect to decrease the refcount to 0.\n\nIn the case of a blessed RECORD+typmod tupdesc, *is* there any event that\ncould ever cause the cache to drop that reference? Or is the tupdesc just\ngoing to live there for the life of the backend, its refcount sometimes\ngoing above and back down to but never below 1?\n\nThat would fit with your \"refcounting isn't terribly necessary in that\ncase\". If that's how it works, it's interesting having the two different\npatterns: if it's a shared one, it has refcount -1 and you never fuss\nwith it and it never goes away; if it's a local one it has a non-negative\nrefcount and you go through all the motions and it never goes away anyway.\n\n>> 3. Is that shared case the only way you could see a non-refcounted\n>> TupleDesc handed to you by the typcache?\n> \n> I'm not sure what happens for a non-shared RECORD tupdesc, but it\n> probably wouldn't be wise to assume anything either way.\n\nIf I'm reading this right, for the non-shared case, the copy that goes\ninto the cache is made refcounted. (The copy you presented for blessing\ngets the assigned typmod written in it, and no change to its refcount\nfield.)\n\nThere's really just one thing I'm interested in assuming:\n\n*In general*, if I encounter a tupdesc with -1 refcount, I had better not\nassume much about its longevity. It might be in a context that's about to\ngo away. 
If I'll be wanting it later, I had better defensively copy it\ninto some context I've chosen.\n\n(Ok, I guess if it's a tupdesc provided to me in a function call, I can\nassume it is good for the duration of the call, or if it's part of an\nSPI result, I can assume it's good until SPI_finish.)\n\nBut if I have gone straight to the typcache to ask for a RECORD tupdesc,\nand the one it gives me has -1 refcount, is it reasonable to assume\nI can retain a reference to that without the defensive copy?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 14 Dec 2021 18:35:48 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: Life cycles of tuple descriptors" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> But if I have gone straight to the typcache to ask for a RECORD tupdesc,\n> and the one it gives me has -1 refcount, is it reasonable to assume\n> I can retain a reference to that without the defensive copy?\n\nThe API contract for lookup_rowtype_tupdesc specifies that you must \"call\nReleaseTupleDesc or DecrTupleDescRefCount when done using the tupdesc\".\nIt's safe to assume that the tupdesc will stick around as long as you\nhaven't done that.\n\nAPIs that don't mention a refcount are handing you a tupdesc of uncertain\nlifespan (no more than the current query, likely), so if you want the\ntupdesc to last a long time you'd better copy it into storage you control.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Dec 2021 20:02:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Life cycles of tuple descriptors" }, { "msg_contents": "On 12/14/21 20:02, Tom Lane wrote:\n> The API contract for lookup_rowtype_tupdesc specifies that you must \"call\n> ReleaseTupleDesc or DecrTupleDescRefCount when done using the tupdesc\".\n> It's safe to assume that the tupdesc will stick around as long as you\n> haven't done that.\n\nI think what threw me was having a 
function whose API contract mentions\nreference counts, but that sometimes gives me things that don't have them.\n\nBut I guess, making the between-the-lines of the contract explicit,\nif lookup_rowtype_tupdesc is contracted to give me a tupdesc that sticks\naround for as long as I haven't called ReleaseTupleDesc, and it sometimes\nelects to give me one for which ReleaseTupleDesc is a no-op, the contract\nis still that the thing sticks around for (at least) as long as I haven't\ndone that.\n\nCool. :)\n\nOh, hmm, maybe one thing in that API comment ought to be changed. It says\nI must call ReleaseTupleDesc *or* DecrTupleDescRefCount. Maybe that dates\nfrom before the shared registry? ReleaseTupleDesc is safe, but anybody who\nuses DecrTupleDescRefCount on a lookup_rowtype_tupdesc result could be\nin for an assertion failure if a non-refcounted tupdesc is returned.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 14 Dec 2021 20:58:32 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: Life cycles of tuple descriptors" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 12/14/21 20:02, Tom Lane wrote:\n>> The API contract for lookup_rowtype_tupdesc specifies that you must \"call\n>> ReleaseTupleDesc or DecrTupleDescRefCount when done using the tupdesc\".\n>> It's safe to assume that the tupdesc will stick around as long as you\n>> haven't done that.\n\n> I think what threw me was having a function whose API contract mentions\n> reference counts, but that sometimes gives me things that don't have them.\n\nThat's supposed to be hidden under ReleaseTupleDesc; you shouldn't have to\nthink about it.\n\n> Oh, hmm, maybe one thing in that API comment ought to be changed. It says\n> I must call ReleaseTupleDesc *or* DecrTupleDescRefCount. Maybe that dates\n> from before the shared registry? 
ReleaseTupleDesc is safe, but anybody who\n> uses DecrTupleDescRefCount on a lookup_rowtype_tupdesc result could be\n> in for an assertion failure if a non-refcounted tupdesc is returned.\n\nYeah, I was just wondering the same. I think DecrTupleDescRefCount\nis safe if you know you are looking up a named composite type, but\nmaybe that's still too much familiarity with typcache innards.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Dec 2021 21:14:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Life cycles of tuple descriptors" }, { "msg_contents": "I wrote:\n> Chapman Flack <chap@anastigmatix.net> writes:\n>> Oh, hmm, maybe one thing in that API comment ought to be changed. It says\n>> I must call ReleaseTupleDesc *or* DecrTupleDescRefCount. Maybe that dates\n>> from before the shared registry? ReleaseTupleDesc is safe, but anybody who\n>> uses DecrTupleDescRefCount on a lookup_rowtype_tupdesc result could be\n>> in for an assertion failure if a non-refcounted tupdesc is returned.\n\n> Yeah, I was just wondering the same. I think DecrTupleDescRefCount\n> is safe if you know you are looking up a named composite type, but\n> maybe that's still too much familiarity with typcache innards.\n\nHere's a draft patch for this. There are several places that are\ndirectly using DecrTupleDescRefCount after lookup_rowtype_tupdesc\nor equivalent, which'd now be forbidden. I think they are all safe\ngiven the assumption that the typcache's tupdescs for named composites\nare refcounted. (The calls in expandedrecord.c could be working\nwith RECORD, but those code paths just checked that the tupdesc\nis refcounted.) So there's no actual bug here, and no reason to\nback-patch, but this seems like a good idea to decouple callers\na bit more from typcache's internal logic. 
None of these call\nsites are so performance-critical that one extra test will hurt.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 15 Dec 2021 17:50:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Life cycles of tuple descriptors" }, { "msg_contents": "On Thu, Dec 16, 2021 at 11:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Here's a draft patch for this. There are several places that are\n> directly using DecrTupleDescRefCount after lookup_rowtype_tupdesc\n> or equivalent, which'd now be forbidden. I think they are all safe\n> given the assumption that the typcache's tupdescs for named composites\n> are refcounted. (The calls in expandedrecord.c could be working\n> with RECORD, but those code paths just checked that the tupdesc\n> is refcounted.) So there's no actual bug here, and no reason to\n> back-patch, but this seems like a good idea to decouple callers\n> a bit more from typcache's internal logic. None of these call\n> sites are so performance-critical that one extra test will hurt.\n\nLGTM.\n\n\n", "msg_date": "Thu, 16 Dec 2021 12:18:04 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Life cycles of tuple descriptors" }, { "msg_contents": "On 12/15/21 17:50, Tom Lane wrote:\n\n> Here's a draft patch for this. There are several places that are\n> directly using DecrTupleDescRefCount after lookup_rowtype_tupdesc\n> or equivalent, which'd now be forbidden. I think they are all safe\n> given the assumption that the typcache's tupdescs for named composites\n> are refcounted. (The calls in expandedrecord.c could be working\n> with RECORD, but those code paths just checked that the tupdesc\n> is refcounted.) 
So there's no actual bug here, and no reason to\n> back-patch, but this seems like a good idea to decouple callers\n> a bit more from typcache's internal logic.\n\nI agree with the analysis at each of those sites, and the new comment\nclears up everything that had puzzled me before.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 15 Dec 2021 18:20:11 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: Life cycles of tuple descriptors" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Dec 16, 2021 at 11:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Here's a draft patch for this.\n\n> LGTM.\n\nPushed, thanks for looking.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 Dec 2021 18:58:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Life cycles of tuple descriptors" } ]
[ { "msg_contents": "Hi!\n\nBack when we were working on zheap, we realized that we needed some\nway of storing undo records that would permit us to discard old undo\nefficiently when it was no longer needed. Thomas Munro dubbed this\n\"conveyor belt storage,\" the idea being that items are added at one\nend and removed from the other. In the zheap patches, Thomas took an\napproach similar to what we've done elsewhere for CLOG and WAL: keep\ncreating new files, put a relatively small amount of data in each one,\nand remove the old files in their entirety when you can prove that\nthey are no longer needed. While that did and does seem reasonable, I\ncame to dislike it, because it meant we needed a separate smgr for\nundo as compared with everything else, which was kind of complicated.\nAlso, that approach was tightly integrated with and thus only useful\nfor zheap, and as Thomas observed at the time, the problem seems to be\nfairly general. I got interested in this problem again because of the\nidea discussed in\nhttps://www.postgresql.org/message-id/CA%2BTgmoZgapzekbTqdBrcH8O8Yifi10_nB7uWLB8ajAhGL21M6A%40mail.gmail.com\nof having a \"dead TID\" relation fork in which to accumulate TIDs that\nhave been marked as dead in the table but not yet removed from the\nindexes, so as to permit a looser coupling between table vacuum and\nindex vacuum. That's yet another case where you accumulate new data\nand then at a certain point the oldest data can be thrown away because\nits intended purpose has been served.\n\nSo here's a patch. Basically, it lets you initialize a relation fork\nas a \"conveyor belt,\" and then you can add pages of basically\narbitrary data to the conveyor belt and then throw away old ones and,\nmodulo bugs, it will take care of recycling space for you. There's a\nfairly detailed README in the patch if you want a more detailed\ndescription of how the whole thing works. 
It's missing some features\nthat I want it to have: for example, I'd like to have on-line\ncompaction, where whatever logical page numbers of data currently\nexist can be relocated to lower physical page numbers thus allowing\nyou to return space to the operating system, hopefully without\nrequiring a strong heavyweight lock. But that's not implemented yet,\nand it's also missing a few other things, like test cases, performance\nresults, more thorough debugging, better write-ahead logging\nintegration, and some code to use it to do something useful. But\nthere's enough here, I think, for you to form an opinion about whether\nyou think this is a reasonable direction, and give any design-level\nfeedback that you'd like to give. My colleagues Dilip Kumar and Mark\nDilger have contributed to this effort with some testing help, but all\nthe code in this patch is mine.\n\nWhen I was chatting with Andres about this, he jumped to the question\nof whether this could be used to replace SLRUs. To be honest, it's not\nreally designed for applications that are quite that intense. I think\nwe would get too much contention on the metapage, which you have to\nlock and often modify for just about every conveyor belt operation.\nPerhaps that problem can be dodged somehow, and it might even be a\ngood idea, because (1) then we'd have that data in shared_buffers\ninstead of a separate tiny buffer space and (2) the SLRU code is\npretty crappy. But I'm more interested in using this for new things\nthan I am in replacing existing core technology where any new bugs\nwill break everything for everyone. 
Still, I'm happy to hear ideas\naround this kind of thing, or to hear the results of any\nexperimentation you may want to do.\n\nLet me know what you think.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 14 Dec 2021 18:00:34 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "generalized conveyor belt storage" }, { "msg_contents": "On Tue, Dec 14, 2021 at 3:00 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I got interested in this problem again because of the\n> idea discussed in\n> https://www.postgresql.org/message-id/CA%2BTgmoZgapzekbTqdBrcH8O8Yifi10_nB7uWLB8ajAhGL21M6A%40mail.gmail.com\n> of having a \"dead TID\" relation fork in which to accumulate TIDs that\n> have been marked as dead in the table but not yet removed from the\n> indexes, so as to permit a looser coupling between table vacuum and\n> index vacuum. That's yet another case where you accumulate new data\n> and then at a certain point the oldest data can be thrown away because\n> its intended purpose has been served.\n\nThanks for working on this! It seems very strategically important to me.\n\n> So here's a patch. Basically, it lets you initialize a relation fork\n> as a \"conveyor belt,\" and then you can add pages of basically\n> arbitrary data to the conveyor belt and then throw away old ones and,\n> modulo bugs, it will take care of recycling space for you. There's a\n> fairly detailed README in the patch if you want a more detailed\n> description of how the whole thing works.\n\nHow did you test this? I ask because it would be nice if there was a\nconvenient way to try this out, as somebody with a general interest.\nEven just a minimal test module, that you used for development work.\n\n> When I was chatting with Andres about this, he jumped to the question\n> of whether this could be used to replace SLRUs. 
To be honest, it's not\n> really designed for applications that are quite that intense.\n\nI personally think that SLRUs (and the related problem of hint bits)\nare best addressed by tracking which transactions have modified what\nheap blocks (perhaps only approximately), and then eagerly cleaning up\naborted transaction IDs, using a specialized version of VACUUM that\ndoes something like heap pruning for aborted xacts. It just seems\nweird that we keep around clog in order to not have to run VACUUM too\nfrequently, which (among other things) already cleans up after aborted\ntransactions. Having a more efficient data structure for commit status\ninformation doesn't seem all that promising, because the problem is\nactually our insistence on remembering which XIDs aborted almost\nindefinitely. There is no fundamental reason why it has to work that\nway. I don't mean that it could in principle be changed (that's almost\nalways true); I mean that it seems like an accident of history, that\ncould have easily gone another way: the ancestral design of clog\nexisting in a world without MVCC. It seems like a totally vestigial\nthing to me, which I wouldn't say about a lot of other things (just\nclog and freezing).\n\nSomething like the conveyor belt seems like it would help with this\nother kind of VACUUM, mind you. We probably don't want to do anything\nlike index vacuuming for these aborted transactions (actually maybe we\nwant to do retail deletion, which is much more likely to work out\nthere). But putting the TIDs into a store used for dead TIDs could\nmake sense. Especially in extreme cases.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 14 Dec 2021 17:02:24 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: generalized conveyor belt storage" }, { "msg_contents": "On Wed, Dec 15, 2021 at 6:33 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n\n> How did you test this? 
I ask because it would be nice if there was a\n> convenient way to try this out, as somebody with a general interest.\n> Even just a minimal test module, that you used for development work.\n>\n\nI have tested this using a simple extension over the conveyor belt\nAPIs, this extension provides wrapper apis over the conveyor belt\nAPIs. These are basic APIs which can be extended even further for\nmore detailed testing, e.g. as of now I have provided an api to read\ncomplete page from the conveyor belt but that can be easily extended\nto read from a particular offset and also the amount of data to read.\n\nParallelly I am also testing this by integrating it with the vacuum,\nwhich is still a completely WIP patch and needs a lot of design level\nimprovement so not sharing it, so once it is in better shape I will\npost that in the separate thread.\n\nBasically, we have decoupled different vacuum phases (only for manual\nvacuum ) something like below,\nVACUUM (first_pass) t;\nVACUUM idx;\nVACUUM (second_pass) t;\n\nSo in the first pass we are just doing the first pass of vacuum and\nwherever we are calling lazy_vacuum() we are storing those dead tids\nin the conveyor belt. In the index pass, user can vacuum independent\nindex and therein it will just fetch the last conveyor belt point upto\nwhich it has already vacuum, then from there load dead tids which can\nfit in maintenance_work_mem and then call the index bulk delete (this\nwill be done in loop until we complete the index vacuum). In the\nsecond pass, we check all the indexes and find the minimum conveyor\nbelt point upto which all indexes have vacuumed. We also fetch the\nlast point where we left the second pass of the heap. Now we fetch\nthe dead tids from the conveyor belt (which fits in\nmaintenance_work_mem) from the last vacuum point of heap upto the min\nindex vacuum point. 
And perform the second heap pass.\n\nI have given the highlights of the decoupling work just to show what\nsort of testing we are doing for the conveyor belt. But we can\ndiscuss this on a separate thread when I am ready to post that patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 15 Dec 2021 11:17:51 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generalized conveyor belt storage" }, { "msg_contents": "On Wed, 15 Dec 2021 at 00:01, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> Hi!\n[...]\n> So here's a patch. Basically, it lets you initialize a relation fork\n> as a \"conveyor belt,\" and then you can add pages of basically\n> arbitrary data to the conveyor belt and then throw away old ones and,\n> modulo bugs, it will take care of recycling space for you. There's a\n> fairly detailed README in the patch if you want a more detailed\n> description of how the whole thing works.\n\nI was reading through this README when I hit the following:\n\n> +Conceptually, a relation fork organized as a conveyor belt has three parts:\n> +\n> +- Payload. The payload is whatever data the user of this module wishes\n> + to store. The conveyor belt doesn't care what you store in a payload page,\n> + but it does require that you store something: each time a payload page is\n> + initialized, it must end up with either pd_lower > SizeOfPageHeaderData,\n> + or pd_lower < BLCKSZ.\n\nAs SizeOfPageHeaderData < BLCKSZ, isn't this condition always true? 
Or\nat least, this currently allows for either any value of pd_lower, or\nthe (much clearer) 'pd_lower <= SizeOfPageHeaderData or pd_lower >=\nBLCKSZ', depending on exclusiveness of the either_or clause.\n\n> It's missing some features\n> that I want it to have: for example, I'd like to have on-line\n> compaction, where whatever logical page numbers of data currently\n> exist can be relocated to lower physical page numbers thus allowing\n> you to return space to the operating system, hopefully without\n> requiring a strong heavyweight lock. But that's not implemented yet,\n> and it's also missing a few other things, like test cases, performance\n> results, more thorough debugging, better write-ahead logging\n> integration, and some code to use it to do something useful. But\n> there's enough here, I think, for you to form an opinion about whether\n> you think this is a reasonable direction, and give any design-level\n> feedback that you'd like to give. My colleagues Dilip Kumar and Mark\n> Dilger have contributed to this effort with some testing help, but all\n> the code in this patch is mine.\n\nYou mentioned that this is meant to be used as a \"relation fork\", but\nI couldn't find new code in relpath.h (where ForkNumber etc. are\ndefined) that allows one more fork per relation. Is that too on the\nmissing features list, or did I misunderstand what you meant with\n\"relation fork\"?\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Wed, 15 Dec 2021 16:03:05 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generalized conveyor belt storage" }, { "msg_contents": "On Wed, Dec 15, 2021 at 10:03 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> > +Conceptually, a relation fork organized as a conveyor belt has three parts:\n> > +\n> > +- Payload. The payload is whatever data the user of this module wishes\n> > + to store. 
The conveyor belt doesn't care what you store in a payload page,\n> > + but it does require that you store something: each time a payload page is\n> > + initialized, it must end up with either pd_lower > SizeOfPageHeaderData,\n> > + or pd_lower < BLCKSZ.\n>\n> As SizeOfPageHeaderData < BLCKSZ, isn't this condition always true? Or\n> at least, this currently allows for either any value of pd_lower, or\n> the (much clearer) 'pd_lower <= SizeOfPageHeaderData or pd_lower >=\n> BLCKSZ', depending on exclusiveness of the either_or clause.\n\nThe second part of the condition should say pd_upper < BLCKSZ, not\npd_lower. Woops. It's intended to be false for uninitialized pages and\nalso for pages that are initialized but completely empty with no line\npointers and no special space.\n\n> You mentioned that this is meant to be used as a \"relation fork\", but\n> I couldn't find new code in relpath.h (where ForkNumber etc. are\n> defined) that allows one more fork per relation. Is that too on the\n> missing features list, or did I misunderstand what you meant with\n> \"relation fork\"?\n\nIt's up to whoever is using the code to provide the relation fork -\nand it could be the main fork of some new kind of relation, or it\ncould use some other existing fork number, or it could be an entirely\nnew fork. 
But it would be up to the calling code to figure that stuff\nout.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 15 Dec 2021 10:34:21 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generalized conveyor belt storage" }, { "msg_contents": "On Wed, Dec 15, 2021 at 9:04 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Dec 15, 2021 at 10:03 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> [...]\n\nThought patch is WIP, here are a few comments that I found while\nreading the patch and thought might help:\n\n+ {\n+ if (meta->cbm_oldest_index_segment ==\n+ meta->cbm_newest_index_segment)\n+ elog(ERROR, \"must remove last index segment when only one remains\");\n+ meta->cbm_oldest_index_segment = segno;\n+ }\n\nHow about having to assert or elog to ensure that 'segno' is indeed\nthe successor?\n--\n\n+ if (meta->cbm_index[offset] != offset)\n+ elog(ERROR,\n+ \"index entry at offset %u was expected to be %u but found %u\",\n+ offset, segno, meta->cbm_index[offset]);\n\nIF condition should be : meta->cbm_index[offset] != segno ?\n--\n\n+ if (segno >= CB_METAPAGE_FREESPACE_BYTES * BITS_PER_BYTE)\n+ elog(ERROR, \"segment %u out of range for metapage fsm\", segno);\n\nI think CB_FSM_SEGMENTS_FOR_METAPAGE should be used like\ncb_metapage_set_fsm_bit()\n--\n\n+/*\n+ * Increment the count of segments allocated.\n+ */\n+void\n+cb_metapage_increment_next_segment(CBMetapageData *meta, CBSegNo segno)\n+{\n+ if (segno != meta->cbm_next_segment)\n+ elog(ERROR, \"extending to create segment %u but next segment is %u\",\n+ segno, meta->cbm_next_segment);\n\nI didn't understand this error, what does it mean? 
It would be\nhelpful to add a brief about what it means and why we are throwing it\nand/or rephrasing the error bit.\n--\n\n+++ b/src/backend/access/conveyor/cbxlog.c\n@@ -0,0 +1,442 @@\n+/*-------------------------------------------------------------------------\n+ *\n+ * cbxlog.c\n+ * XLOG support for conveyor belts.\n+ *\n+ * For each REDO function in this file, see cbmodify.c for the\n+ * corresponding function that performs the modification during normal\n+ * running and logs the record that we REDO here.\n+ *\n+ * Copyright (c) 2016-2021, PostgreSQL Global Development Group\n+ *\n+ * src/backend/access/conveyor/cbmodify.c\n\nIncorrect file name: s/cbmodify.c/cbxlog.c/\n--\n\n+ can_allocate_segment =\n+ (free_segno_first_blkno < possibly_not_on_disk_blkno)\n\nThe condition should be '<=' ?\n--\n\n+ * ConveyorBeltPhysicalTruncate. For more aggressive cleanup options, see\n+ * ConveyorBeltCompact or ConveyorBeltRewrite.\n\nDidn't find ConveyorBeltCompact or ConveyorBeltRewrite code, might yet\nto be implemented?\n--\n\n+ * the value computed here here if the entry at that offset is already\n\n\"here\" twice.\n--\n\nAnd few typos:\n\nothr\nsemgents\nfucntion\nrefrenced\ninitialied\nremve\nextrordinarily\nimplemenation\n--\n\nRegards,\nAmul\n\n\n", "msg_date": "Wed, 29 Dec 2021 17:37:48 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generalized conveyor belt storage" }, { "msg_contents": "On Wed, Dec 29, 2021 at 7:08 AM Amul Sul <sulamul@gmail.com> wrote:\n> Thought patch is WIP, here are a few comments that I found while\n> reading the patch and thought might help:\n>\n> + {\n> + if (meta->cbm_oldest_index_segment ==\n> + meta->cbm_newest_index_segment)\n> + elog(ERROR, \"must remove last index segment when only one remains\");\n> + meta->cbm_oldest_index_segment = segno;\n> + }\n>\n> How about having to assert or elog to ensure that 'segno' is indeed\n> the successor?\n\nThis code doesn't have any inexpensive way to 
know whether segno is\nthe successor. To figure that out you'd need to look at the latest\nindex page that does exist, but this function's job is just to look at\nthe metapage. Besides, the eventual caller will have just looked up\nthat value in order to pass it to this function, so double-checking\nwhat we just computed right before doesn't really make sense.\n\n> + if (meta->cbm_index[offset] != offset)\n> + elog(ERROR,\n> + \"index entry at offset %u was expected to be %u but found %u\",\n> + offset, segno, meta->cbm_index[offset]);\n>\n> IF condition should be : meta->cbm_index[offset] != segno ?\n\nOops, you're right.\n\n> + if (segno >= CB_METAPAGE_FREESPACE_BYTES * BITS_PER_BYTE)\n> + elog(ERROR, \"segment %u out of range for metapage fsm\", segno);\n>\n> I think CB_FSM_SEGMENTS_FOR_METAPAGE should be used like\n> cb_metapage_set_fsm_bit()\n\nGood call.\n\n> +/*\n> + * Increment the count of segments allocated.\n> + */\n> +void\n> +cb_metapage_increment_next_segment(CBMetapageData *meta, CBSegNo segno)\n> +{\n> + if (segno != meta->cbm_next_segment)\n> + elog(ERROR, \"extending to create segment %u but next segment is %u\",\n> + segno, meta->cbm_next_segment);\n>\n> I didn't understand this error, what does it mean? It would be\n> helpful to add a brief about what it means and why we are throwing it\n> and/or rephrasing the error bit.\n\ncbm_next_segment is supposed to be the lowest-numbered segment that\ndoesn't yet exist. Imagine that it's 4. But, suppose that the free\nspace map shows segment 4 as in use, even though the metapage's\ncbm_next_segment value claims it's not allocated yet. Then maybe we\ndecide that the lowest-numbered free segment according to the\nfreespace map is 5, while meanwhile the metapage is pretty sure we've\nnever created 4. Then I think we'll end up here and trip over this\nerror check. 
To get more understanding of this, look at how\nConveyorBeltGetNewPage selects free_segno.\n\n> Incorrect file name: s/cbmodify.c/cbxlog.c/\n\nRight, thanks.\n\n> + can_allocate_segment =\n> + (free_segno_first_blkno < possibly_not_on_disk_blkno)\n>\n> The condition should be '<=' ?\n\nIt doesn't look that way to me. If they were equal, then that block\ndoesn't necessarily exist on disk ... in which case we should not\nallocate. Am I missing something?\n\n> + * ConveyorBeltPhysicalTruncate. For more aggressive cleanup options, see\n> + * ConveyorBeltCompact or ConveyorBeltRewrite.\n>\n> Didn't find ConveyorBeltCompact or ConveyorBeltRewrite code, might yet\n> to be implemented?\n\nRight.\n\n> + * the value computed here here if the entry at that offset is already\n>\n> \"here\" twice.\n\nOops.\n\n> And few typos:\n>\n> othr\n> semgents\n> fucntion\n> refrenced\n> initialied\n> remve\n> extrordinarily\n> implemenation\n\nWow, OK, that's a lot of typos.\n\nUpdated patch attached, also with a fix for the mistake in the readme\nthat Matthias found, and a few other bug fixes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 4 Jan 2022 14:42:44 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generalized conveyor belt storage" }, { "msg_contents": "On Wed, Jan 5, 2022 at 1:13 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>\n> Updated patch attached, also with a fix for the mistake in the readme\n> that Matthias found, and a few other bug fixes.\n\nRebased over the current master. 
Basically, I rebased for other patch\nsets I am planning to post for heap and index vacuum decoupling.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 26 Jan 2022 17:59:50 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generalized conveyor belt storage" }, { "msg_contents": " On Wed, Jan 5, 2022 at 1:12 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Dec 29, 2021 at 7:08 AM Amul Sul <sulamul@gmail.com> wrote:\n> > Thought patch is WIP, here are a few comments that I found while\n> > reading the patch and thought might help:\n> >\n> > + {\n> > + if (meta->cbm_oldest_index_segment ==\n> > + meta->cbm_newest_index_segment)\n> > + elog(ERROR, \"must remove last index segment when only one remains\");\n> > + meta->cbm_oldest_index_segment = segno;\n> > + }\n> >\n> > How about having to assert or elog to ensure that 'segno' is indeed\n> > the successor?\n>\n> This code doesn't have any inexpensive way to know whether segno is\n> the successor. To figure that out you'd need to look at the latest\n> index page that does exist, but this function's job is just to look at\n> the metapage. 
Besides, the eventual caller will have just looked up\n> that value in order to pass it to this function, so double-checking\n> what we just computed right before doesn't really make sense.\n>\n\nOk.\n\nAssert(meta->cbm_oldest_index_segment < segno) was in my mind since\nthe segment to be removed is between meta->cbm_oldest_index_segment\nand segno, IIUC.\n\n> [...]\n> Updated patch attached, also with a fix for the mistake in the readme\n> that Matthias found, and a few other bug fixes.\n>\n\nFew more comments for this version:\n\n+void\n+cb_cache_invalidate(CBCache *cache, CBPageNo index_start,\n+ uint64 index_segments_moved)\n+{\n+ if (index_segments_moved != cache->index_segments_moved)\n+ {\n+ cb_iseg_reset(cache->iseg);\n+ cache->index_segments_moved = index_segments_moved;\n+ }\n+ else if (index_start > cache->oldest_possible_start)\n+ {\n+ cb_iseg_iterator it;\n+ cb_iseg_entry *entry;\n+\n+ cb_iseg_start_iterate(cache->iseg, &it);\n+ while ((entry = cb_iseg_iterate(cache->iseg, &it)) != NULL)\n+ if (entry->index_segment_start < index_start)\n+ cb_iseg_delete_item(cache->iseg, entry);\n+ }\n+}\n\nShouldn't update oldest_possible_start, once all the entries preceding\nthe index_start are deleted?\n--\n\n+CBSegNo\n+cbfsmpage_find_free_segment(Page page)\n+{\n+ CBFSMPageData *fsmp = cb_fsmpage_get_special(page);\n+ unsigned i;\n+ unsigned j;\n+\n+ StaticAssertStmt(CB_FSMPAGE_FREESPACE_BYTES % sizeof(uint64) == 0,\n+ \"CB_FSMPAGE_FREESPACE_BYTES should be a multiple of 8\");\n+\n\nI am a bit confused about this assertion, why is that so?\nWhy should CB_FSMPAGE_FREESPACE_BYTES be multiple of 8?\nDo you mean CB_FSM_SEGMENTS_PER_FSMPAGE instead of CB_FSMPAGE_FREESPACE_BYTES?\n--\n\n+/*\n+ * Add index entries for logical pages beginning at 'pageno'.\n+ *\n+ * It is the caller's responsibility to supply the correct index\npage, and\n+ * to make sure that there is enough room for the entries to be added.\n+ */\n+void\n+cb_indexpage_add_index_entries(Page page,\n+ 
unsigned pageoffset,\n+ unsigned num_index_entries,\n+ CBSegNo *index_entries)\n\nThe first comment line says \"...logical pages beginning at 'pageno'\",\nthere is nothing 'pageno' in that function, does it mean pageoffset?\n--\n\n+ * Sets *pageno to the first logical page covered by this index page.\n+ *\n+ * Returns the segment number to which the obsolete index entry points.\n+ */\n+CBSegNo\n+cb_indexpage_get_obsolete_entry(Page page, unsigned *pageoffset,\n+ CBPageNo *first_pageno)\n+{\n+ CBIndexPageData *ipd = cb_indexpage_get_special(page);\n+\n+ *first_pageno = ipd->cbidx_first_page;\n+\n+ while (*pageoffset < CB_INDEXPAGE_INDEX_ENTRIES &&\n+ ipd->cbidx_entry[*pageoffset] != CB_INVALID_SEGMENT)\n+ ++*pageoffset;\n+\n+ return ipd->cbidx_entry[*pageoffset];\n+}\n\nHere I think *first_pageno should be instead of *pageno in the comment\nline. The second line says \"Returns the segment number to which the\nobsolete index entry points.\" I am not sure if that is correct or I\nmight have misunderstood this? 
Looks like the function returns\nCB_INVALID_SEGMENT or the ipd->cbidx_entry[CB_INDEXPAGE_INDEX_ENTRIES]\nvalue, which could be garbage due to out-of-bounds memory\naccess.\n--\n\n+/*\n+ * Copy the indicated number of index entries out of the metapage.\n+ */\n+void\n+cb_metapage_get_index_entries(CBMetapageData *meta, unsigned num_index_entries,\n+ CBSegNo *index_entries)\n+{\n+ Assert(num_index_entries <= cb_metapage_get_index_entries_used(meta));\n\nIMO, having elog instead of assert would be good, like\ncb_metapage_remove_index_entries() has an elog for (used < count).\n--\n\n+ * Copyright (c) 2016-2021, PostgreSQL Global Development Group\n\nThis is rather a question for my own knowledge: why does this copyright\nyear start from 2016? I thought we would add only the current year for a\nnewly added file.\n\nRegards,\nAmul\n\n\n", "msg_date": "Thu, 27 Jan 2022 19:15:34 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generalized conveyor belt storage" }, { "msg_contents": "What's with the free text in cbstorage.h? 
I would guess that this\n> wouldn't even compile, and nobody has noticed because the file is not\n> included by anything yet ...\n\nI'm not able to compile:\n\ncbfsmpage.c: In function ‘cb_fsmpage_initialize’:\ncbfsmpage.c:34:11: warning: unused variable ‘fsm_block_spacing’\n[-Wunused-variable]\n unsigned fsm_block_spacing = cb_fsm_block_spacing(pages_per_segment);\n ^\ncbfsmpage.c:33:14: warning: unused variable ‘first_fsm_block’\n[-Wunused-variable]\n BlockNumber first_fsm_block = cb_first_fsm_block(pages_per_segment);\n...\ncbxlog.c: In function ‘cb_xlog_allocate_payload_segment’:\ncbxlog.c:70:24: error: void value not ignored as it ought to be\n bool have_fsm_page = XLogRecGetBlockTag(record, 1, NULL, NULL, NULL);\n ^\ncbxlog.c: In function ‘cb_xlog_allocate_index_segment’:\ncbxlog.c:123:17: error: void value not ignored as it ought to be\n have_prev_page = XLogRecGetBlockTag(record, 2, NULL, NULL, NULL);\n ^\ncbxlog.c:124:16: error: void value not ignored as it ought to be\n have_fsm_page = XLogRecGetBlockTag(record, 3, NULL, NULL, NULL);\n ^\ncbxlog.c: In function ‘cb_xlog_recycle_payload_segment’:\ncbxlog.c:311:16: error: void value not ignored as it ought to be\n have_metapage = XLogRecGetBlockTag(record, 0, NULL, NULL, NULL);\n ^\ncbxlog.c:312:18: error: void value not ignored as it ought to be\n have_index_page = XLogRecGetBlockTag(record, 1, NULL, NULL, NULL);\n ^\ncbxlog.c:313:16: error: void value not ignored as it ought to be\n have_fsm_page = XLogRecGetBlockTag(record, 2, NULL, NULL, NULL);\n ^\nmake[4]: *** [cbxlog.o] Error 1\nmake[4]: Leaving directory\n`/home/thom/Development/postgresql/src/backend/access/conveyor'\nmake[3]: *** [conveyor-recursive] Error 2\nmake[3]: Leaving directory\n`/home/thom/Development/postgresql/src/backend/access'\nmake[2]: *** [access-recursive] Error 2\n\n\n-- \nThom\n\n\n", "msg_date": "Wed, 20 Apr 2022 13:32:01 +0100", "msg_from": "Thom Brown <thom@linux.com>", "msg_from_op": false, "msg_subject": "Re: generalized 
conveyor belt storage" }, { "msg_contents": "On Wed, Apr 20, 2022 at 8:02 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> What's with the free text in cbstorage.h? I would guess that this\n> wouldn't even compile, and nobody has noticed because the file is not\n> included by anything yet ...\n\nI think I was using that file for random notes with the idea of\nremoving it eventually.\n\nI'll clean that up if I get back around to working on this. Right now\nit's not clear to me how to get this integrated with vacuum in a\nuseful way, so finishing this part of it isn't that exciting, at least\nnot unless somebody else comes up with a cool use for it.\n\nWhat motivated you to look at this?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Apr 2022 10:06:06 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generalized conveyor belt storage" }, { "msg_contents": "On Wed, Apr 20, 2022 at 8:32 AM Thom Brown <thom@linux.com> wrote:\n> On Wed, 20 Apr 2022 at 13:02, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > What's with the free text in cbstorage.h? I would guess that this\n> > wouldn't even compile, and nobody has noticed because the file is not\n> > included by anything yet ...\n>\n> I'm not able to compile:\n\nSince I haven't been working on this for a while, the patch set's not updated.\n\n> cbfsmpage.c: In function ‘cb_fsmpage_initialize’:\n> cbfsmpage.c:34:11: warning: unused variable ‘fsm_block_spacing’\n> [-Wunused-variable]\n> unsigned fsm_block_spacing = cb_fsm_block_spacing(pages_per_segment);\n> ^\n\nHmm, well I guess if that variable is unused we can just delete it.\n\n> cbxlog.c: In function ‘cb_xlog_allocate_payload_segment’:\n> cbxlog.c:70:24: error: void value not ignored as it ought to be\n> bool have_fsm_page = XLogRecGetBlockTag(record, 1, NULL, NULL, NULL);\n\nThis is because Tom recently made that function return void instead of\nbool. 
I guess all these will need to be changed to use\nXLogRecGetBlockTagExtended and pass an extra NULL for the\n*prefetch_buffer argument.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Apr 2022 10:08:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generalized conveyor belt storage" }, { "msg_contents": "On 2022-Apr-20, Robert Haas wrote:\n\n> I'll clean that up if I get back around to working on this. Right now\n> it's not clear to me how to get this integrated with vacuum in a\n> useful way, so finishing this part of it isn't that exciting, at least\n> not unless somebody else comes up with a cool use for it.\n\nOh, okay.\n\n> What motivated you to look at this?\n\nShrug ... it happened to fall in my mutt index view and I wanted to have\na general idea of where it was going. I have no immediate use for it,\nsorry.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"I dream about dreams about dreams\", sang the nightingale\nunder the pale moon (Sandman)\n\n\n", "msg_date": "Wed, 20 Apr 2022 18:07:55 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: generalized conveyor belt storage" } ]
[ { "msg_contents": "Hi all,\n\nI've received numerous complaints about CREATE SCHEMA IF NOT EXISTS failing\nwhen the user lacks CREATE privileges on the database - even if the schema\nalready exists. A typical scenario would be a multi-tenant\nschema-per-tenant setup, where the schema and tenant user are created\nbeforehand, but then some database layer or ORM wants to ensure that the\nschema is there so the above is issued.\n\nWould it be reasonable to have the above no error if the schema already\nexists? That would make it similar to the following (which I'm switching to\nin the Entity Framework Core ORM):\n\nDO $$\nBEGIN\n    IF NOT EXISTS(SELECT 1 FROM pg_namespace WHERE nspname = 'foo') THEN\n        CREATE SCHEMA \"foo\";\n    END IF;\nEND $$;\n\nThe same could apply to other CREATE ... IF NOT EXISTS variations.\n\nShay", "msg_date": "Wed, 15 Dec 2021 13:35:24 +0100", "msg_from": "Shay Rojansky <roji@roji.org>", "msg_from_op": true, "msg_subject": "Privilege required for IF EXISTS event if the object already exists" }, { "msg_contents": "Shay Rojansky <roji@roji.org> writes:\n> I've received numerous complaints about CREATE SCHEMA IF NOT EXISTS failing\n> when the user lacks CREATE privileges on the database - even if the schema\n> already exists. A typical scenario would be a multi-tenant\n> schema-per-tenant setup, where the schema and tenant user are created\n> beforehand, but then some database layer or ORM wants to ensure that the\n> schema is there so the above is issued.\n\n> Would it be reasonable to have the above no error if the schema already\n> exists?\n\nUmmm ... why?  What's the point of issuing such a command from a role\nthat lacks the privileges to actually do the creation?  It seems to\nme that you're asking us to design around very-badly-written apps.\n\n> The same could apply to other CREATE ... IF NOT EXISTS variations.\n\nYeah, it would only make sense if we did it across the board.\nFor all of them, though, this seems like it'd just move the needle\neven further in terms of not having certainty about the properties\nof the object.  I'll spare you my customary rant about that, and\njust note that not knowing who owns a schema you're using is a\nlarge security hazard.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 Dec 2021 10:44:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Privilege required for IF EXISTS event if the object already\n exists" }, { "msg_contents": "On Wed, Dec 15, 2021 at 5:35 AM Shay Rojansky <roji@roji.org> wrote:\n\n> Hi all,\n>\n> I've received numerous complaints about CREATE SCHEMA IF NOT EXISTS\n> failing when the user lacks CREATE privileges on the database - even if the\n> schema already exists. 
A typical scenario would be a multi-tenant\n> schema-per-tenant setup, where the schema and tenant user are created\n> beforehand, but then some database layer or ORM wants to ensure that the\n> schema is there so the above is issued.\n>\n> Would it be reasonable to have the above no error if the schema already\n> exists?\n>\n\nI would say it is reasonable in theory. But I cannot think of an actual\nscenario that would benefit from such a change. Your stated use case is\nrejected since you explicitly do not want tenants to be able to create\nschemas - so the simple act of issuing \"CREATE SCHEMA\" is disallowed.\n\nThat would make it similar to the following (which I'm switching to in the\n> Entity Framework Core ORM):\n>\n> DO $$\n> BEGIN\n> IF NOT EXISTS(SELECT 1 FROM pg_namespace WHERE nspname = 'foo') THEN\n> CREATE SCHEMA \"foo\";\n> END IF;\n> END $$;\n>\n>\nBecause tenants are not allowed to CREATE SCHEMA you should replace \"CREATE\nSCHEMA\" in the body of that DO block with \"RAISE ERROR 'Schema foo required\nbut not present!';\" Or, just tell them to create objects in the presumed\npresent schema and let them see the \"schema not found\" error that would\noccur in rare case the schema didn't exist.\n\nDavid J.", "msg_date": "Wed, 15 Dec 2021 09:10:41 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Privilege required for IF EXISTS event if the object already\n exists" }, { "msg_contents": "On 12/15/21 11:10, David G. Johnston wrote:\n>> IF NOT EXISTS(SELECT 1 FROM pg_namespace WHERE nspname = 'foo') THEN\n\nOrthogonally to any other comments,\n\nIF pg_catalog.to_regnamespace('foo') IS NULL THEN\n\nmight be tidier, if you don't need to support PG < 9.5.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 15 Dec 2021 11:44:56 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Privilege required for IF EXISTS event if the object already\n exists" }, { "msg_contents": "> I would say it is reasonable in theory. But I cannot think of an actual\nscenario that would benefit from such a change. 
Your stated use case is\nrejected since you explicitly do not want tenants to be able to create\nschemas - so the simple act of issuing \"CREATE SCHEMA\" is disallowed.\n> [...]\n> Because tenants are not allowed to CREATE SCHEMA you should replace\n\"CREATE SCHEMA\" in the body of that DO block with \"RAISE ERROR 'Schema foo\nrequired but not present!';\" Or, just tell them to create objects in the\npresumed present schema and let them see the \"schema not found\" error that\nwould occur in rare case the schema didn't exist.\n\nThe point here is when layers/ORMs are used, and are not necessarily aware\nof the multi-tenant scenario. In my concrete real-world complaints here,\nusers instruct the ORM to generate the database schema for them. Now,\nbefore creating tables, the ORM generates CREATE SCHEMA IF NOT EXISTS, to\nensure that the schema exists before CREATE TABLE; that's reasonable\ngeneral-purpose behavior (again, it does not know about multi-tenancy).\nIt's the user's responsibility to have already created the schema and\nassigned rights to the right PG user, at which point everything could work\ntransparently (schema creation is skipped because it already exists, CREATE\nTABLE succeeds).", "msg_date": "Wed, 15 Dec 2021 19:17:27 +0100", "msg_from": "Shay Rojansky <roji@roji.org>", "msg_from_op": true, "msg_subject": "Re: Privilege required for IF EXISTS event if the object already\n exists" }, { "msg_contents": "On Wednesday, December 15, 2021, Shay Rojansky <roji@roji.org> wrote:\n>\n> . Now, before creating tables, the ORM generates CREATE SCHEMA IF NOT\n> EXISTS, to ensure that the schema exists before CREATE TABLE; that's\n> reasonable general-purpose behavior.\n>\n\nIf the user hasn’t specified they want the schema created it’s arguable\nthat executing create schema anyway is reasonable. The orm user should\nknow whether they are expected/able to create the schema as part of their\nresponsibilities and instruct the orm accordingly and expect it to only\ncreate what it has been explicitly directed to create.\n\nDavid J.", "msg_date": "Wed, 15 Dec 2021 11:50:10 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Privilege required for IF EXISTS event if the object already\n exists" }, { "msg_contents": ">> Now, before creating tables, the ORM generates CREATE SCHEMA IF NOT\nEXISTS, to ensure that the schema exists before CREATE TABLE; that's\nreasonable general-purpose behavior.\n>\n> If the user hasn’t specified they want the schema created it’s arguable\nthat executing create schema anyway is reasonable. The orm user should\nknow whether they are expected/able to create the schema as part of their\nresponsibilities and instruct the orm accordingly and expect it to only\ncreate what it has been explicitly directed to create.\n\nI think the point being missed here, is that the user isn't interacting\ndirectly with PostgreSQL - they're doing so via an ORM which isn't\nnecessarily aware of everything. Yes, a switch could be added to the ORM\nwhere the user instructs it to not ensure that the schema exists, but\nthat's placing unnecessary burden on the user - having the \"ensure\"\noperation silently no-op (instead of throwing) if the schema already exists\njust makes everything smoother.\n\nPut another way, let's say I introduce a user-facing flag in my ORM to opt\nout of CREATE SCHEMA IF NOT EXISTS. If the user forget to pre-create the\nschema, they would still get an error when trying to create the tables\n(since the schema doesn't exist). So failure to properly set up would\ngenerate an error in any case, either when trying to create the schema (if\nno flag is added), or when trying to create the table (if the flag is\nadded). 
This makes the flag pretty useless and an unnecesary extra burden\non the user, when the database could simply be ignoring CREATE SCHEMA IF\nNOT EXISTS for the case where the schema already exists.\n\nIs there any specific reason you think this shouldn't be done?", "msg_date": "Thu, 16 Dec 2021 11:38:09 +0100", "msg_from": "Shay Rojansky <roji@roji.org>", "msg_from_op": true, "msg_subject": "Re: Privilege required for IF EXISTS event if the object already\n exists" }, { "msg_contents": "On Thu, Dec 16, 2021 at 3:38 AM Shay Rojansky <roji@roji.org> wrote:\n\n> >> Now, before creating tables, the ORM generates CREATE SCHEMA IF NOT\n> EXISTS, to ensure that the schema exists before CREATE TABLE; that's\n> reasonable general-purpose behavior.\n> >\n> > If the user hasn’t specified they want the schema created it’s arguable\n> that executing create schema anyway is reasonable. The orm user should\n> know whether they are expected/able to create the schema as part of their\n> responsibilities and instruct the orm accordingly and expect it to only\n> create what it has been explicitly directed to create.\n>\n> I think the point being missed here, is that the user isn't interacting\n> directly with PostgreSQL - they're doing so via an ORM which isn't\n> necessarily aware of everything. Yes, a switch could be added to the ORM\n> where the user instructs it to not ensure that the schema exists, but\n> that's placing unnecessary burden on the user - having the \"ensure\"\n> operation silently no-op (instead of throwing) if the schema already exists\n> just makes everything smoother.\n>\n\nI get that point, and even have sympathy for it. But I'm also fond of the\nposition that \"ensuring a schema exists\" is not something the ORM should be\ndoing. But, if you want to do it anyway you can, with a minimal amount of\npl/pgsql code.\n\n> Is there any specific reason you think this shouldn't be done?\n>\n\nAs I said before, your position seems reasonable. 
I've also got a couple\nof reasonable complaints about IF EXISTS out there. But there is little\ninterest in changing the status quo with regards to the promises that IF\nEXISTS makes. And even with my less constrained views I find that doing\nanything but returning an error to a user that issues CREATE SCHEMA on a\ndatabase for which they lack CREATE privileges is of limited benefit.\nWould I support a well-written patch that made this the new rule?\nProbably. Would I write one or spend time trying to convince others to\nwrite one? No.\n\nDavid J.", "msg_date": "Thu, 16 Dec 2021 09:01:09 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Privilege required for IF EXISTS event if the object already\n exists" }, { "msg_contents": "> As I said before, your position seems reasonable. I've also got a couple\nof reasonable complaints about IF EXISTS out there. But there is little\ninterest in changing the status quo with regards to the promises that IF\nEXISTS makes. And even with my less constrained views I find that doing\nanything but returning an error to a user that issues CREATE SCHEMA on a\ndatabase for which they lack CREATE privileges is of limited benefit.\nWould I support a well-written patch that made this the new rule?\nProbably. Would I write one or spend time trying to convince others to\nwrite one? No.\n\nFair enough, thanks.", "msg_date": "Thu, 16 Dec 2021 17:42:50 +0100", "msg_from": "Shay Rojansky <roji@roji.org>", "msg_from_op": true, "msg_subject": "Re: Privilege required for IF EXISTS event if the object already\n exists" } ]
[ { "msg_contents": "Hi!\n\nWith the current implementation, for GiST indexes created by doing multiple\ninserts, index tuples match heap tuples order, but it doesn't work that way\nfor sorted method where index tuples on all levels are ordered using\ncomparator provided in sortsupport (z-order for geometry type, for\nexample). This means two tuples that are on the same heap page can be far\napart from one another on an index page, and the heap page may be read\ntwice and prefetch performance will degrade.\n\nI've created a patch intended to improve that by sorting index tuples by\nheap tuples TID order on leaf pages.", "msg_date": "Wed, 15 Dec 2021 17:47:05 +0300", "msg_from": "Aliaksandr Kalenik <akalenik@kontur.io>", "msg_from_op": true, "msg_subject": "[PATCH] sort leaf pages by ctid for gist indexes built using sorted\n method" }, { "msg_contents": "\n> With the current implementation, for GiST indexes created by doing multiple inserts, index tuples match heap tuples order, but it doesn't work that way for sorted method where index tuples on all levels are ordered using comparator provided in sortsupport (z-order for geometry type, for example). This means two tuples that are on the same heap page can be far apart from one another on an index page, and the heap page may be read twice and prefetch performance will degrade.\n> \n> I've created a patch intended to improve that by sorting index tuples by heap tuples TID order on leaf pages.\n\nHi!\nThanks you for the patch. The code looks nice and clean.\n From my POV this optimization certainly makes sense.\n\n\nBut can we have some benchmarks showing that this optimization really helps?\n\nI've tried it on my laptop extra build efforts cost us about 5% or CREATE INDEX performance. 
How big would be benefit for scans that we get?\n\nbefore patch\n\npostgres=# create table x as select point (random(),random()) from generate_series(1,3000000,1);\nSELECT 3000000\npostgres=# \\timing \nTiming is on.\npostgres=# create index ON x using gist (point );\nCREATE INDEX\nTime: 1872,503 ms (00:01,873)\npostgres=# create index ON x using gist (point );\nCREATE INDEX\nTime: 1797,329 ms (00:01,797)\npostgres=# create index ON x using gist (point );\nCREATE INDEX\nTime: 1787,362 ms (00:01,787)\npostgres=# create index ON x using gist (point );\nCREATE INDEX\nTime: 1793,545 ms (00:01,794)\npostgres=# create index ON x using gist (point );\nCREATE INDEX\nTime: 1805,572 ms (00:01,806)\n\nAfter patch\n\npostgres=# create table x as select point (random(),random()) from generate_series(1,3000000,1);\nSELECT 3000000\npostgres=# \\timing \nTiming is on.\npostgres=# create index ON x using gist (point );\nCREATE INDEX\nTime: 2134,448 ms (00:02,134)\npostgres=# create index ON x using gist (point );\nCREATE INDEX\nTime: 1945,978 ms (00:01,946)\npostgres=# create index ON x using gist (point );\nCREATE INDEX\nTime: 1965,045 ms (00:01,965)\npostgres=# create index ON x using gist (point );\nCREATE INDEX\nTime: 1973,248 ms (00:01,973)\npostgres=# create index ON x using gist (point );\nCREATE INDEX\nTime: 1970,578 ms (00:01,971)\n\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n", "msg_date": "Thu, 16 Dec 2021 14:49:25 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] sort leaf pages by ctid for gist indexes built using\n sorted method" }, { "msg_contents": "Hi,\n\nOn 2021-12-16 14:49:25 +0500, Andrey Borodin wrote:\n> \n> > With the current implementation, for GiST indexes created by doing multiple inserts, index tuples match heap tuples order, but it doesn't work that way for sorted method where index tuples on all levels are ordered using comparator provided in sortsupport (z-order for geometry type, for 
example). This means two tuples that are on the same heap page can be far apart from one another on an index page, and the heap page may be read twice and prefetch performance will degrade.\n> > \n> > I've created a patch intended to improve that by sorting index tuples by heap tuples TID order on leaf pages.\n> \n> Thanks you for the patch. The code looks nice and clean.\n\nThe patch fails currently doesn't apply: http://cfbot.cputube.org/patch_37_3454.log\n\n\n> But can we have some benchmarks showing that this optimization really helps?\n\nAs there hasn't been a response to this even in the last CF, I'm going to mark\nthis entry as returned with feedback (IMO shouldn't even have been moved to\nthis CF).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Mar 2022 18:23:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] sort leaf pages by ctid for gist indexes built using\n sorted method" } ]
[ { "msg_contents": "OK, old_branches_of_interest.txt now exists on the buildfarm server, and\nthe code has been modified to take notice of it (i.e. to accept builds\nfor branches listed there). The contents are the non-live versions from\n9.2 on.\n\nI have set up a test buildfarm client (which will eventually report\nunder the name 'godwit') alongside crake (Fedora 34). So far testing has\nrun smoothly, there are only two glitches:\n\n * 9.3 and 9.2 don't have a show_dl_suffix make target. This would\n require backpatching b40cb99b85 and d9cdb1ba9e. That's a tiny\n change, and I propose to do it shortly unless there's an objection.\n * I need to undo the removal of client logic that supported 9.2's\n unix_socket_directory setting as opposed to the later\n unix_socket_directories.\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 15 Dec 2021 12:15:52 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Buildfarm support for older versions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I have set up a test buildfarm client (which will eventually report\n> under the name 'godwit') alongside crake (Fedora 34). So far testing has\n> run smoothly, there are only two glitches:\n\n> * 9.3 and 9.2 don't have a show_dl_suffix make target. This would\n> require backpatching b40cb99b85 and d9cdb1ba9e. That's a tiny\n> change, and I propose to do it shortly unless there's an objection.\n\nNot really user-visible, so I can't see a problem with it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 Dec 2021 16:10:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "On 12/15/2021 11:15 am, Andrew Dunstan wrote:\n> OK, old_branches_of_interest.txt now exists on the buildfarm server, \n> and\n> the code has been modified to take notice of it (i.e. 
to accept builds\n> for branches listed there). The contents are the non-live versions from\n> 9.2 on.\n> \n> I have set up a test buildfarm client (which will eventually report\n> under the name 'godwit') alongside crake (Fedora 34). So far testing \n> has\n> run smoothly, there are only two glitches:\n> \n> * 9.3 and 9.2 don't have a show_dl_suffix make target. This would\n> require backpatching b40cb99b85 and d9cdb1ba9e. That's a tiny\n> change, and I propose to do it shortly unless there's an objection.\n> * I need to undo the removal of client logic that supported 9.2's\n> unix_socket_directory setting as opposed to the later\n> unix_socket_directories.\n> \n> cheers\n> \n> andrew\n> \n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n\nWould a FreeBSD head (peripatus or a new animal) help?\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n\n\n", "msg_date": "Wed, 15 Dec 2021 20:36:58 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "\nOn 12/15/21 21:36, Larry Rosenman wrote:\n> On 12/15/2021 11:15 am, Andrew Dunstan wrote:\n>> OK, old_branches_of_interest.txt now exists on the buildfarm server, and\n>> the code has been modified to take notice of it (i.e. to accept builds\n>> for branches listed there). The contents are the non-live versions from\n>> 9.2 on.\n>>\n>> I have set up a test buildfarm client (which will eventually report\n>> under the name 'godwit') alongside crake (Fedora 34). So far testing has\n>> run smoothly, there are only two glitches:\n>>\n>>   * 9.3 and 9.2 don't have a show_dl_suffix make target. This would\n>>     require backpatching b40cb99b85 and d9cdb1ba9e. 
That's a tiny\n>>     change, and I propose to do it shortly unless there's an objection.\n>>   * I need to undo the removal of client logic that supported 9.2's\n>>     unix_socket_directory setting as opposed to the later\n>>     unix_socket_directories.\n>>\n>>\n>\n> Would a FreeBSD head (peripatus or a new animal) help?\n\n\nA new animal, because we're not supporting every build option. On the\nnon-live branches you really only want:\n\n    --enable-debug --enable-cassert --enable-nls\n\n    --enable-tap-tests --with-perl\n\nYou can make it share the same storage as your existing animal (godwit and crake do this). The client is smart enough to manage locks of several animals appropriately.\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 16 Dec 2021 11:02:53 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "On 12/16/2021 10:02 am, Andrew Dunstan wrote:\n> On 12/15/21 21:36, Larry Rosenman wrote:\n>> On 12/15/2021 11:15 am, Andrew Dunstan wrote:\n>>> OK, old_branches_of_interest.txt now exists on the buildfarm server, \n>>> and\n>>> the code has been modified to take notice of it (i.e. to accept \n>>> builds\n>>> for branches listed there). The contents are the non-live versions \n>>> from\n>>> 9.2 on.\n>>> \n>>> I have set up a test buildfarm client (which will eventually report\n>>> under the name 'godwit') alongside crake (Fedora 34). So far testing \n>>> has\n>>> run smoothly, there are only two glitches:\n>>> \n>>>   * 9.3 and 9.2 don't have a show_dl_suffix make target. This would\n>>>     require backpatching b40cb99b85 and d9cdb1ba9e. 
That's a tiny\n>>>     change, and I propose to do it shortly unless there's an \n>>> objection.\n>>>   * I need to undo the removal of client logic that supported 9.2's\n>>>     unix_socket_directory setting as opposed to the later\n>>>     unix_socket_directories.\n>>> \n>>> \n>> \n>> Would a FreeBSD head (peripatus or a new animal) help?\n> \n> \n> A new animal, because we're not supporting every build option. On the\n> non-live branches you really only want:\n> \n>     --enable-debug --enable-cassert --enable-nls\n> \n>     --enable-tap-tests --with-perl\n> \n> You can make it share the same storage as your existing animal (godwit\n> and crake do this). The client is smart enough to manage locks of\n> several animals appropriately.\n> \n> cheers\n> \n> andrew\n> \n> --\n\nSo just create a new animal / config file, and set those options?\nand FreeBSD head / main would be useful?\n(Currently FreeBSD 14 and clang 13).\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n\n\n", "msg_date": "Thu, 16 Dec 2021 10:11:59 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "\nOn 12/16/21 11:11, Larry Rosenman wrote:\n>\n>>\n>> A new animal, because we're not supporting every build option. On the\n>> non-live branches you really only want:\n>>\n>>     --enable-debug --enable-cassert --enable-nls\n>>\n>>     --enable-tap-tests --with-perl\n>>\n>> You can make it share the same storage as your existing animal (godwit\n>> and crake do this). The client is smart enough to manage locks of\n>> several animals appropriately.\n>>\n>>\n>\n> So just create a new animal / config file, and set those options?\n> and FreeBSD head / main would be useful?\n> (Currently FreeBSD 14 and clang 13).\n>\n\nSure. 
I think if we get coverage for modern Linux, FreeBSD and Windows\nwe should be in good shape.\n\nI doubt we need a heck of a lot of animals - there's not going to be\nmuch going on here.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 16 Dec 2021 12:17:18 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "On 12/16/2021 11:17 am, Andrew Dunstan wrote:\n> On 12/16/21 11:11, Larry Rosenman wrote:\n>> \n>>> \n>>> A new animal, because we're not supporting every build option. On the\n>>> non-live branches you really only want:\n>>> \n>>>     --enable-debug --enable-cassert --enable-nls\n>>> \n>>>     --enable-tap-tests --with-perl\n>>> \n>>> You can make it share the same storage as your existing animal \n>>> (godwit\n>>> and crake do this). The client is smart enough to manage locks of\n>>> several animals appropriately.\n>>> \n>>> \n>> \n>> So just create a new animal / config file, and set those options?\n>> and FreeBSD head / main would be useful?\n>> (Currently FreeBSD 14 and clang 13).\n>> \n> \n> Sure. 
I think if we get coverage for modern Linux, FreeBSD and Windows\n> we should be in good shape.\n> \n> I doubt we need a heck of a lot of animals - there's not going to be\n> much going on here.\n> \n> \n> cheers\n> \n> \n> andrew\n> \n\nWould you mind terribly giving me the exact steps?\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n\n\n", "msg_date": "Thu, 16 Dec 2021 11:26:49 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "\nOn 12/16/21 12:26, Larry Rosenman wrote:\n> On 12/16/2021 11:17 am, Andrew Dunstan wrote:\n>> On 12/16/21 11:11, Larry Rosenman wrote:\n>>>\n>>>>\n>>>> A new animal, because we're not supporting every build option. On the\n>>>> non-live branches you really only want:\n>>>>\n>>>>     --enable-debug --enable-cassert --enable-nls\n>>>>\n>>>>     --enable-tap-tests --with-perl\n>>>>\n>>>> You can make it share the same storage as your existing animal (godwit\n>>>> and crake do this). The client is smart enough to manage locks of\n>>>> several animals appropriately.\n>>>>\n>>>>\n>>>\n>>> So just create a new animal / config file, and set those options?\n>>> and FreeBSD head / main would be useful?\n>>> (Currently FreeBSD 14 and clang 13).\n>>>\n>>\n>> Sure.
I think if we get coverage for modern Linux, FreeBSD and Windows\n>> we should be in good shape.\n>>\n>> I doubt we need a heck of a lot of animals - there's not going to be\n>> much going on here.\n>>\n>>\n>\n> Would you mind terribly giving me the exact steps?\n\n\n * register a new animal with the same details\n * copy your existing config file to $new_animal.conf\n * edit the file and change the animal name and secret, the config_opts\n as above, and remove TestUpgrade from the modules setting\n * change branches_to_build to [qw(\n             REL9_2_STABLE REL9_3_STABLE REL9_4_STABLE\n             REL9_5_STABLE REL9_6_STABLE)]\n * you should probably unset CCACHEDIR in both config files\n * test with ./run_branches --test --config $newanimal.conf --run-all\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 16 Dec 2021 15:47:21 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "On 12/16/2021 2:47 pm, Andrew Dunstan wrote:\n> On 12/16/21 12:26, Larry Rosenman wrote:\n>> On 12/16/2021 11:17 am, Andrew Dunstan wrote:\n>>> On 12/16/21 11:11, Larry Rosenman wrote:\n>>>> \n>>>>> \n>>>>> A new animal, because we're not supporting every build option. On \n>>>>> the\n>>>>> non-live branches you really only want:\n>>>>> \n>>>>>     --enable-debug --enable-cassert --enable-nls\n>>>>> \n>>>>>     --enable-tap-tests --with-perl\n>>>>> \n>>>>> You can make it share the same storage as your existing animal \n>>>>> (godwit\n>>>>> and crake do this). The client is smart enough to manage locks of\n>>>>> several animals appropriately.\n>>>>> \n>>>>> \n>>>> \n>>>> So just create a new animal / config file, and set those options?\n>>>> and FreeBSD head / main would be useful?\n>>>> (Currently FreeBSD 14 and clang 13).\n>>>> \n>>> \n>>> Sure.
I think if we get coverage for modern Linux, FreeBSD and \n>>> Windows\n>>> we should be in good shape.\n>>> \n>>> I doubt we need a heck of a lot of animals - there's not going to be\n>>> much going on here.\n>>> \n>>> \n>> \n>> Would you mind terribly giving me the exact steps?\n> \n> \n> * register a new animal with the same details\n> * copy your existing config file to $new_animal.conf\n> * edit the file and change the animal name and secret, the \n> config_opts\n> as above, and remove TestUpgrade from the modules setting\n> * change branches_to_build to [qw(\n>             REL9_2_STABLE REL9_3_STABLE REL9_4_STABLE\n>             REL9_5_STABLE REL9_6_STABLE)]\n> * you should probably unset CCACHEDIR in both config files\n> * test with ./run_branches --test --config $newanimal.conf --run-all\n> \n\n\nI get:\nERROR for site owner:\nInvalid domain for site key\n\non https://pgbuildfarm.org/cgi-bin/register-form.pl\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n\n\n", "msg_date": "Thu, 16 Dec 2021 14:53:45 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "\nOn 12/16/21 15:53, Larry Rosenman wrote:\n>\n>\n> I get:\n> ERROR for site owner:\n> Invalid domain for site key\n>\n> on https://pgbuildfarm.org/cgi-bin/register-form.pl\n\n\ntry https://buildfarm.postgresql.org/cgi-bin/register-form.pl\n\n\ncheers\n\n\nandrew\n\n\n\n", "msg_date": "Thu, 16 Dec 2021 16:23:17 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "On 12/16/2021 3:23 pm, Andrew Dunstan wrote:\n> On 12/16/21 15:53, Larry Rosenman wrote:\n>> \n>> \n>> I get:\n>> ERROR for site owner:\n>> Invalid domain for site key\n>> \n>> on https://pgbuildfarm.org/cgi-bin/register-form.pl\n> \n> \n> try
https://buildfarm.postgresql.org/cgi-bin/register-form.pl\n> \n> \n> cheers\n> \n> \n> andrew\nI filled out that form on the 16th, and haven't gotten a new animal \nassignment. Is there\na problem with my data?\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n\n\n", "msg_date": "Tue, 21 Dec 2021 14:06:44 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "\nOn 12/21/21 15:06, Larry Rosenman wrote:\n> I filled out that form on the 16th, and haven't gotten a new animal\n> assignment.  Is there\n> a problem with my data?\n\n\n\nIt's a manual process, done when your friendly admins have time.
I have\n> approved it now.\n> \n> \n> cheers\n> \n> \n> andrew\n> \n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n\n\nREL9_2_STABLE make dies on:\ngmake[4]: Entering directory \n'/home/pgbuildfarm/buildroot/REL9_2_STABLE/pgsql.build/src/backend/utils'\n'/usr/bin/perl' ./generate-errcodes.pl \n../../../src/backend/utils/errcodes.txt > errcodes.h\ncc -O2 -Wall -Wmissing-prototypes -Wpointer-arith \n-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute \n-Wformat-security -fno-strict-aliasing -fwrapv \n-Wno-unused-command-line-argument -Wno-compound-token-split-by-macro \n-Wno-sometimes-uninitialized -g -I../../src/port -DFRONTEND \n-I../../src/include -I/usr/local/include -c -o path.o path.c\ngmake[4]: Leaving directory \n'/home/pgbuildfarm/buildroot/REL9_2_STABLE/pgsql.build/src/backend/utils'\nprereqdir=`cd 'utils/' >/dev/null && pwd` && \\\n cd '../../src/include/utils/' && rm -f errcodes.h && \\\n ln -s \"$prereqdir/errcodes.h\" .\ngmake[3]: Leaving directory \n'/home/pgbuildfarm/buildroot/REL9_2_STABLE/pgsql.build/src/backend'\ncc -O2 -Wall -Wmissing-prototypes -Wpointer-arith \n-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute \n-Wformat-security -fno-strict-aliasing\n$ tail -30 make.log\nld: error: relocation R_X86_64_PC32 cannot be used against symbol \n__mb_sb_limit; recompile with -fPIC\n>>> defined in /lib/libc.so.7\n>>> referenced by pgstrcasecmp.c:109\n>>> pgstrcasecmp.o:(pg_toupper) in archive \n>>> ../../src/port/libpgport.a\n\nld: error: relocation R_X86_64_PC32 cannot be used against symbol \n_CurrentRuneLocale; recompile with -fPIC\n>>> defined in /lib/libc.so.7\n>>> referenced by runetype.h:0 (/usr/include/runetype.h:0)\n>>> pgstrcasecmp.o:(pg_toupper) in archive \n>>> ../../src/port/libpgport.a\n\nld: error: relocation R_X86_64_PC32 cannot be used against symbol \n__mb_sb_limit; recompile with -fPIC\n>>> defined in /lib/libc.so.7\n>>> referenced by pgstrcasecmp.c:126\n>>> 
pgstrcasecmp.o:(pg_tolower) in archive \n>>> ../../src/port/libpgport.a\n\nld: error: relocation R_X86_64_PC32 cannot be used against symbol \n_CurrentRuneLocale; recompile with -fPIC\n>>> defined in /lib/libc.so.7\n>>> referenced by runetype.h:0 (/usr/include/runetype.h:0)\n>>> pgstrcasecmp.o:(pg_tolower) in archive \n>>> ../../src/port/libpgport.a\ncc: error: linker command failed with exit code 1 (use -v to see \ninvocation)\ngmake[3]: *** [../../src/Makefile.port:20: timetravel.so] Error 1\ngmake[3]: *** Waiting for unfinished jobs....\nrm moddatetime.o autoinc.o refint.o timetravel.o insert_username.o\ngmake[3]: Leaving directory \n'/home/pgbuildfarm/buildroot/REL9_2_STABLE/pgsql.build/contrib/spi'\ngmake[2]: *** [GNUmakefile:126: submake-contrib-spi] Error 2\ngmake[2]: *** Waiting for unfinished jobs....\ngmake[2]: Leaving directory \n'/home/pgbuildfarm/buildroot/REL9_2_STABLE/pgsql.build/src/test/regress'\ngmake[1]: *** [Makefile:33: all-test/regress-recurse] Error 2\ngmake[1]: Leaving directory \n'/home/pgbuildfarm/buildroot/REL9_2_STABLE/pgsql.build/src'\ngmake: *** [GNUmakefile:11: all-src-recurse] Error 2\n$\n\nThe other branches are still running.\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n\n\n", "msg_date": "Wed, 22 Dec 2021 19:16:21 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "On 12/22/2021 7:16 pm, Larry Rosenman wrote:\n> On 12/22/2021 7:20 am, Andrew Dunstan wrote:\n>> On 12/21/21 15:06, Larry Rosenman wrote:\n>>> I filled out that form on the 16th, and haven't gotten a new animal\n>>> assignment.  Is there\n>>> a problem with my data?\n>> \n>> \n>> \n>> It's a manual process, done when your friendly admins have time. 
I \n>> have\n>> approved it now.\n>> \n>> \n>> cheers\n>> \n>> \n>> andrew\n>> \n>> --\n>> Andrew Dunstan\n>> EDB: https://www.enterprisedb.com\n> \n> \n> REL9_2_STABLE make dies on:\n> gmake[4]: Entering directory\n> '/home/pgbuildfarm/buildroot/REL9_2_STABLE/pgsql.build/src/backend/utils'\n> '/usr/bin/perl' ./generate-errcodes.pl\n> ../../../src/backend/utils/errcodes.txt > errcodes.h\n> cc -O2 -Wall -Wmissing-prototypes -Wpointer-arith\n> -Wdeclaration-after-statement -Wendif-labels\n> -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing\n> -fwrapv -Wno-unused-command-line-argument\n> -Wno-compound-token-split-by-macro -Wno-sometimes-uninitialized -g\n> -I../../src/port -DFRONTEND -I../../src/include -I/usr/local/include\n> -c -o path.o path.c\n> gmake[4]: Leaving directory\n> '/home/pgbuildfarm/buildroot/REL9_2_STABLE/pgsql.build/src/backend/utils'\n> prereqdir=`cd 'utils/' >/dev/null && pwd` && \\\n> cd '../../src/include/utils/' && rm -f errcodes.h && \\\n> ln -s \"$prereqdir/errcodes.h\" .\n> gmake[3]: Leaving directory\n> '/home/pgbuildfarm/buildroot/REL9_2_STABLE/pgsql.build/src/backend'\n> cc -O2 -Wall -Wmissing-prototypes -Wpointer-arith\n> -Wdeclaration-after-statement -Wendif-labels\n> -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing\n> $ tail -30 make.log\n> ld: error: relocation R_X86_64_PC32 cannot be used against symbol\n> __mb_sb_limit; recompile with -fPIC\n>>>> defined in /lib/libc.so.7\n>>>> referenced by pgstrcasecmp.c:109\n>>>> pgstrcasecmp.o:(pg_toupper) in archive \n>>>> ../../src/port/libpgport.a\n> \n> ld: error: relocation R_X86_64_PC32 cannot be used against symbol\n> _CurrentRuneLocale; recompile with -fPIC\n>>>> defined in /lib/libc.so.7\n>>>> referenced by runetype.h:0 (/usr/include/runetype.h:0)\n>>>> pgstrcasecmp.o:(pg_toupper) in archive \n>>>> ../../src/port/libpgport.a\n> \n> ld: error: relocation R_X86_64_PC32 cannot be used against symbol\n> __mb_sb_limit; recompile with -fPIC\n>>>> defined in 
/lib/libc.so.7\n>>>> referenced by pgstrcasecmp.c:126\n>>>> pgstrcasecmp.o:(pg_tolower) in archive \n>>>> ../../src/port/libpgport.a\n> \n> ld: error: relocation R_X86_64_PC32 cannot be used against symbol\n> _CurrentRuneLocale; recompile with -fPIC\n>>>> defined in /lib/libc.so.7\n>>>> referenced by runetype.h:0 (/usr/include/runetype.h:0)\n>>>> pgstrcasecmp.o:(pg_tolower) in archive \n>>>> ../../src/port/libpgport.a\n> cc: error: linker command failed with exit code 1 (use -v to see \n> invocation)\n> gmake[3]: *** [../../src/Makefile.port:20: timetravel.so] Error 1\n> gmake[3]: *** Waiting for unfinished jobs....\n> rm moddatetime.o autoinc.o refint.o timetravel.o insert_username.o\n> gmake[3]: Leaving directory\n> '/home/pgbuildfarm/buildroot/REL9_2_STABLE/pgsql.build/contrib/spi'\n> gmake[2]: *** [GNUmakefile:126: submake-contrib-spi] Error 2\n> gmake[2]: *** Waiting for unfinished jobs....\n> gmake[2]: Leaving directory\n> '/home/pgbuildfarm/buildroot/REL9_2_STABLE/pgsql.build/src/test/regress'\n> gmake[1]: *** [Makefile:33: all-test/regress-recurse] Error 2\n> gmake[1]: Leaving directory\n> '/home/pgbuildfarm/buildroot/REL9_2_STABLE/pgsql.build/src'\n> gmake: *** [GNUmakefile:11: all-src-recurse] Error 2\n> $\n> \n> The other branches are still running.\n\n\nHere's the full run:\n$ bin/latest/run_branches.pl --test --config $(pwd)/conf/gerenuk.conf \n--run-all\nWed Dec 22 19:05:53 2021: buildfarm run for gerenuk:REL9_2_STABLE \nstarting\ngerenuk:REL9_2_STABLE [19:05:54] checking out source ...\ngerenuk:REL9_2_STABLE [19:06:06] checking if build run needed ...\ngerenuk:REL9_2_STABLE [19:06:06] copying source to pgsql.build ...\ngerenuk:REL9_2_STABLE [19:06:08] running configure ...\ngerenuk:REL9_2_STABLE [19:06:36] running make ...\nBranch: REL9_2_STABLE\nStage Make failed with status 2\nWed Dec 22 19:08:21 2021: buildfarm run for gerenuk:REL9_3_STABLE \nstarting\ngerenuk:REL9_3_STABLE [19:08:21] checking out source ...\ngerenuk:REL9_3_STABLE [19:08:27] 
checking if build run needed ...\ngerenuk:REL9_3_STABLE [19:08:28] copying source to pgsql.build ...\ngerenuk:REL9_3_STABLE [19:08:29] running configure ...\ngerenuk:REL9_3_STABLE [19:08:52] running make ...\ngerenuk:REL9_3_STABLE [19:10:38] running make check ...\ngerenuk:REL9_3_STABLE [19:11:05] running make contrib ...\ngerenuk:REL9_3_STABLE [19:11:15] running make install ...\ngerenuk:REL9_3_STABLE [19:11:19] running make contrib install ...\ngerenuk:REL9_3_STABLE [19:11:21] checking pg_upgrade\ngerenuk:REL9_3_STABLE [19:12:29] running make check miscellaneous \nmodules ...\ngerenuk:REL9_3_STABLE [19:12:29] setting up db cluster (C)...\ngerenuk:REL9_3_STABLE [19:12:32] starting db (C)...\ngerenuk:REL9_3_STABLE [19:12:33] running make installcheck (C)...\ngerenuk:REL9_3_STABLE [19:13:00] restarting db (C)...\ngerenuk:REL9_3_STABLE [19:13:03] running make isolation check ...\ngerenuk:REL9_3_STABLE [19:14:00] restarting db (C)...\ngerenuk:REL9_3_STABLE [19:14:03] running make PL installcheck (C)...\ngerenuk:REL9_3_STABLE [19:14:04] restarting db (C)...\ngerenuk:REL9_3_STABLE [19:14:07] running make contrib installcheck \n(C)...\ngerenuk:REL9_3_STABLE [19:14:24] stopping db (C)...\ngerenuk:REL9_3_STABLE [19:14:26] running make ecpg check ...\ngerenuk:REL9_3_STABLE [19:14:47] OK\nBranch: REL9_3_STABLE\nAll stages succeeded\nWed Dec 22 19:14:48 2021: buildfarm run for gerenuk:REL9_4_STABLE \nstarting\ngerenuk:REL9_4_STABLE [19:14:48] checking out source ...\ngerenuk:REL9_4_STABLE [19:14:52] checking if build run needed ...\ngerenuk:REL9_4_STABLE [19:14:52] copying source to pgsql.build ...\ngerenuk:REL9_4_STABLE [19:15:14] running configure ...\ngerenuk:REL9_4_STABLE [19:15:32] running make ...\ngerenuk:REL9_4_STABLE [19:17:22] running make check ...\ngerenuk:REL9_4_STABLE [19:17:49] running make contrib ...\ngerenuk:REL9_4_STABLE [19:18:00] running make install ...\ngerenuk:REL9_4_STABLE [19:18:03] running make contrib install ...\ngerenuk:REL9_4_STABLE [19:18:06] 
checking pg_upgrade\ngerenuk:REL9_4_STABLE [19:19:11] checking test-decoding\ngerenuk:REL9_4_STABLE [19:20:13] running make check miscellaneous \nmodules ...\ngerenuk:REL9_4_STABLE [19:20:13] running bin test initdb ...\nBranch: REL9_4_STABLE\nStage initdbCheck failed with status 2\nWed Dec 22 19:20:20 2021: buildfarm run for gerenuk:REL9_5_STABLE \nstarting\ngerenuk:REL9_5_STABLE [19:20:20] checking out source ...\ngerenuk:REL9_5_STABLE [19:20:26] checking if build run needed ...\ngerenuk:REL9_5_STABLE [19:20:26] copying source to pgsql.build ...\ngerenuk:REL9_5_STABLE [19:20:56] running configure ...\ngerenuk:REL9_5_STABLE [19:21:17] running make ...\ngerenuk:REL9_5_STABLE [19:23:16] running make check ...\ngerenuk:REL9_5_STABLE [19:23:43] running make contrib ...\ngerenuk:REL9_5_STABLE [19:23:53] running make testmodules ...\ngerenuk:REL9_5_STABLE [19:23:53] running make install ...\ngerenuk:REL9_5_STABLE [19:23:57] running make contrib install ...\ngerenuk:REL9_5_STABLE [19:23:59] running make testmodules install ...\ngerenuk:REL9_5_STABLE [19:23:59] checking pg_upgrade\ngerenuk:REL9_5_STABLE [19:25:09] checking test-decoding\ngerenuk:REL9_5_STABLE [19:26:02] running make check miscellaneous \nmodules ...\ngerenuk:REL9_5_STABLE [19:26:02] running bin test initdb ...\nCan't stat pgsql.build/src/bin/initdb/tmp_check: No such file or \ndirectory\n at /home/pgbuildfarm/bin/build-farm-13.1/PGBuild/Utils.pm line 222.\nBranch: REL9_5_STABLE\nStage initdbCheck failed with status 2\nWed Dec 22 19:26:04 2021: buildfarm run for gerenuk:REL9_6_STABLE \nstarting\ngerenuk:REL9_6_STABLE [19:26:04] checking out source ...\ngerenuk:REL9_6_STABLE [19:26:12] checking if build run needed ...\ngerenuk:REL9_6_STABLE [19:26:12] copying source to pgsql.build ...\ngerenuk:REL9_6_STABLE [19:26:47] running configure ...\ngerenuk:REL9_6_STABLE [19:27:06] running make ...\ngerenuk:REL9_6_STABLE [19:29:13] running make check ...\ngerenuk:REL9_6_STABLE [19:29:41] running make contrib 
...\ngerenuk:REL9_6_STABLE [19:29:51] running make testmodules ...\ngerenuk:REL9_6_STABLE [19:29:51] running make install ...\ngerenuk:REL9_6_STABLE [19:29:55] running make contrib install ...\ngerenuk:REL9_6_STABLE [19:29:57] running make testmodules install ...\ngerenuk:REL9_6_STABLE [19:29:58] checking pg_upgrade\ngerenuk:REL9_6_STABLE [19:31:09] checking test-decoding\ngerenuk:REL9_6_STABLE [19:32:02] running make check miscellaneous \nmodules ...\ngerenuk:REL9_6_STABLE [19:32:02] running bin test initdb ...\nCan't stat pgsql.build/src/bin/initdb/tmp_check: No such file or \ndirectory\n at /home/pgbuildfarm/bin/build-farm-13.1/PGBuild/Utils.pm line 222.\nBranch: REL9_6_STABLE\nStage initdbCheck failed with status 2\n$\n\nLooks like there are \"issues\".\n\nShould I set it up on cron? Or does someone want to look first?\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n\n\n", "msg_date": "Wed, 22 Dec 2021 19:33:16 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> REL9_2_STABLE make dies on:\n> ld: error: relocation R_X86_64_PC32 cannot be used against symbol \n> _CurrentRuneLocale; recompile with -fPIC\n> [etc]\n\nWhat configure options did you use?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Dec 2021 22:34:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "On 12/22/2021 9:34 pm, Tom Lane wrote:\n> Larry Rosenman <ler@lerctr.org> writes:\n>> REL9_2_STABLE make dies on:\n>> ld: error: relocation R_X86_64_PC32 cannot be used against symbol\n>> _CurrentRuneLocale; recompile with -fPIC\n>> [etc]\n> \n> What configure options did you use?\n> \n> \t\t\tregards, tom lane\n\nconfig_opts =>[\n qw(\n 
--enable-cassert\n --enable-debug\n --enable-nls\n --enable-tap-tests\n --with-perl\n )\n ],\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n\n\n", "msg_date": "Wed, 22 Dec 2021 21:45:26 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> On 12/22/2021 9:34 pm, Tom Lane wrote:\n>> What configure options did you use?\n\n> config_opts =>[\n> qw(\n> --enable-cassert\n> --enable-debug\n> --enable-nls\n> --enable-tap-tests\n> --with-perl\n> )\n> ],\n\nDoes it work if you drop --enable-nls? (It'd likely be worth fixing\nif so, but I'm trying to narrow the possible causes.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Dec 2021 22:59:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "On 12/22/2021 9:59 pm, Tom Lane wrote:\n> Larry Rosenman <ler@lerctr.org> writes:\n>> On 12/22/2021 9:34 pm, Tom Lane wrote:\n>>> What configure options did you use?\n> \n>> config_opts =>[\n>> qw(\n>> --enable-cassert\n>> --enable-debug\n>> --enable-nls\n>> --enable-tap-tests\n>> --with-perl\n>> )\n>> ],\n> \n> Does it work if you drop --enable-nls? 
(It'd likely be worth fixing\n> if so, but I'm trying to narrow the possible causes.)\n> \n> \t\t\tregards, tom lane\n\n\nNope...\n\ngmake[3]: Leaving directory \n'/home/pgbuildfarm/buildroot/REL9_2_STABLE/pgsql.build/contrib/dummy_seclabel'\ncp ../../../contrib/dummy_seclabel/dummy_seclabel.so dummy_seclabel.so\ncc -O2 -Wall -Wmissing-prototypes -Wpointer-arith \n-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute \n-Wformat-security -fno-strict-aliasing -fwrapv \n-Wno-unused-command-line-argument -Wno-compound-token-split-by-macro \n-Wno-sometimes-uninitialized -g -fPIC -DPIC -L../../src/port \n-L/usr/local/lib -Wl,--as-needed \n-Wl,-R'/home/pgbuildfarm/buildroot/REL9_2_STABLE/inst/lib' \n-L../../src/port -lpgport -shared -o moddatetime.so moddatetime.o\ncc -O2 -Wall -Wmissing-prototypes -Wpointer-arith \n-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute \n-Wformat-security -fno-strict-aliasing -fwrapv \n-Wno-unused-command-line-argument -Wno-compound-token-split-by-macro \n-Wno-sometimes-uninitialized -g -fPIC -DPIC -L../../src/port \n-L/usr/local/lib -Wl,--as-needed \n-Wl,-R'/home/pgbuildfarm/buildroot/REL9_2_STABLE/inst/lib' \n-L../../src/port -lpgport -shared -o insert_username.so \ninsert_username.o\ncc -O2 -Wall -Wmissing-prototypes -Wpointer-arith \n-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute \n-Wformat-security -fno-strict-aliasing -fwrapv \n-Wno-unused-command-line-argument -Wno-compound-token-split-by-macro \n-Wno-sometimes-uninitialized -g -fPIC -DPIC -L../../src/port \n-L/usr/local/lib -Wl,--as-needed \n-Wl,-R'/home/pgbuildfarm/buildroot/REL9_2_STABLE/inst/lib' \n-L../../src/port -lpgport -shared -o autoinc.so autoinc.o\ncc -O2 -Wall -Wmissing-prototypes -Wpointer-arith \n-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute \n-Wformat-security -fno-strict-aliasing -fwrapv \n-Wno-unused-command-line-argument -Wno-compound-token-split-by-macro 
\n-Wno-sometimes-uninitialized -g -fPIC -DPIC -L../../src/port \n-L/usr/local/lib -Wl,--as-needed \n-Wl,-R'/home/pgbuildfarm/buildroot/REL9_2_STABLE/inst/lib' \n-L../../src/port -lpgport -shared -o timetravel.so timetravel.o\ncc -O2 -Wall -Wmissing-prototypes -Wpointer-arith \n-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute \n-Wformat-security -fno-strict-aliasing -fwrapv \n-Wno-unused-command-line-argument -Wno-compound-token-split-by-macro \n-Wno-sometimes-uninitialized -g -fPIC -DPIC -L../../src/port \n-L/usr/local/lib -Wl,--as-needed \n-Wl,-R'/home/pgbuildfarm/buildroot/REL9_2_STABLE/inst/lib' \n-L../../src/port -lpgport -shared -o refint.so refint.o\nld: error: relocation R_X86_64_PC32 cannot be used against symbol \n_CurrentRuneLocale; recompile with -fPIC\n>>> defined in /lib/libc.so.7\n>>> referenced by pgstrcasecmp.c:37\n>>> pgstrcasecmp.o:(pg_strcasecmp) in archive \n>>> ../../src/port/libpgport.a\n\nld: error: relocation R_X86_64_PC32 cannot be used against symbol \n__mb_sb_limit; recompile with -fPIC\n>>> defined in /lib/libc.so.7\n>>> referenced by pgstrcasecmp.c:37\n>>> pgstrcasecmp.o:(pg_strcasecmp) in archive \n>>> ../../src/port/libpgport.a\n\nld: error: relocation R_X86_64_PC32 cannot be used against symbol \n_CurrentRuneLocale; recompile with -fPIC\n>>> defined in /lib/libc.so.7\n>>> referenced by pgstrcasecmp.c:70\n>>> pgstrcasecmp.o:(pg_strncasecmp) in archive \n>>> ../../src/port/libpgport.a\n\nld: error: relocation R_X86_64_PC32 cannot be used against symbol \n__mb_sb_limit; recompile with -fPIC\n>>> defined in /lib/libc.so.7\n>>> referenced by pgstrcasecmp.c:70\n>>> pgstrcasecmp.o:(pg_strncasecmp) in archive \n>>> ../../src/port/libpgport.a\n\nld: error: relocation R_X86_64_PC32 cannot be used against symbol \n__mb_sb_limit; recompile with -fPIC\n>>> defined in /lib/libc.so.7\n>>> referenced by pgstrcasecmp.c:109\n>>> pgstrcasecmp.o:(pg_toupper) in archive \n>>> ../../src/port/libpgport.a\n\nld: error: relocation 
R_X86_64_PC32 cannot be used against symbol \n_CurrentRuneLocale; recompile with -fPIC\n>>> defined in /lib/libc.so.7\n>>> referenced by runetype.h:0 (/usr/include/runetype.h:0)\n>>> pgstrcasecmp.o:(pg_toupper) in archive \n>>> ../../src/port/libpgport.a\n\nld: error: relocation R_X86_64_PC32 cannot be used against symbol \n__mb_sb_limit; recompile with -fPIC\n>>> defined in /lib/libc.so.7\n>>> referenced by pgstrcasecmp.c:126\n>>> pgstrcasecmp.o:(pg_tolower) in archive \n>>> ../../src/port/libpgport.a\n\nld: error: relocation R_X86_64_PC32 cannot be used against symbol \n_CurrentRuneLocale; recompile with -fPIC\n>>> defined in /lib/libc.so.7\n>>> referenced by runetype.h:0 (/usr/include/runetype.h:0)\n>>> pgstrcasecmp.o:(pg_tolower) in archive \n>>> ../../src/port/libpgport.a\ncc: error: linker command failed with exit code 1 (use -v to see \ninvocation)\ngmake[3]: *** [../../src/Makefile.port:20: timetravel.so] Error 1\ngmake[3]: *** Waiting for unfinished jobs....\nrm moddatetime.o autoinc.o refint.o timetravel.o insert_username.o\ngmake[3]: Leaving directory \n'/home/pgbuildfarm/buildroot/REL9_2_STABLE/pgsql.build/contrib/spi'\ngmake[2]: *** [GNUmakefile:126: submake-contrib-spi] Error 2\ngmake[2]: *** Waiting for unfinished jobs....\ngmake[2]: Leaving directory \n'/home/pgbuildfarm/buildroot/REL9_2_STABLE/pgsql.build/src/test/regress'\ngmake[1]: *** [Makefile:33: all-test/regress-recurse] Error 2\ngmake[1]: Leaving directory \n'/home/pgbuildfarm/buildroot/REL9_2_STABLE/pgsql.build/src'\ngmake: *** [GNUmakefile:11: all-src-recurse] Error 2\n$\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n\n\n", "msg_date": "Wed, 22 Dec 2021 22:07:51 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> On 12/22/2021 9:59 pm, Tom 
Lane wrote:\n>> Does it work if you drop --enable-nls? (It'd likely be worth fixing\n>> if so, but I'm trying to narrow the possible causes.)\n\n> Nope...\n\nOK. Since 9.3 succeeds, it seems like it's a link problem\nwe fixed at some point. Can you bisect to find where we\nfixed it?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Dec 2021 23:15:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "On 12/22/2021 10:15 pm, Tom Lane wrote:\n> Larry Rosenman <ler@lerctr.org> writes:\n>> On 12/22/2021 9:59 pm, Tom Lane wrote:\n>>> Does it work if you drop --enable-nls? (It'd likely be worth fixing\n>>> if so, but I'm trying to narrow the possible causes.)\n> \n>> Nope...\n> \n> OK. Since 9.3 succeeds, it seems like it's a link problem\n> we fixed at some point. Can you bisect to find where we\n> fixed it?\n> \n> \t\t\tregards, tom lane\n\nI can try -- I haven't been very good at that.\n\nI can give you access to the machine and the id the Buildfarm runs \nunder.\n\n(or give me a good process starting from a buildfarm layout).\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n\n\n", "msg_date": "Wed, 22 Dec 2021 22:20:25 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "\nOn 12/22/21 23:20, Larry Rosenman wrote:\n> On 12/22/2021 10:15 pm, Tom Lane wrote:\n>> Larry Rosenman <ler@lerctr.org> writes:\n>>> On 12/22/2021 9:59 pm, Tom Lane wrote:\n>>>> Does it work if you drop --enable-nls?  (It'd likely be worth fixing\n>>>> if so, but I'm trying to narrow the possible causes.)\n>>\n>>> Nope...\n>>\n>> OK.  Since 9.3 succeeds, it seems like it's a link problem\n>> we fixed at some point.  
Can you bisect to find where we\n>> fixed it?\n>>\n>>             regards, tom lane\n>\n> I can try -- I haven't been very good at that.\n>\n> I can give you access to the machine and the id the Buildfarm runs under.\n>\n> (or give me a good process starting from a buildfarm layout).\n>\n>\n\nI will work on it on my FBSD setup.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 23 Dec 2021 08:50:28 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "\nOn 12/23/21 08:50, Andrew Dunstan wrote:\n> On 12/22/21 23:20, Larry Rosenman wrote:\n>> On 12/22/2021 10:15 pm, Tom Lane wrote:\n>>> Larry Rosenman <ler@lerctr.org> writes:\n>>>> On 12/22/2021 9:59 pm, Tom Lane wrote:\n>>>>> Does it work if you drop --enable-nls?  (It'd likely be worth fixing\n>>>>> if so, but I'm trying to narrow the possible causes.)\n>>>> Nope...\n>>> OK.  Since 9.3 succeeds, it seems like it's a link problem\n>>> we fixed at some point.  
Can you bisect to find where we\n>>> fixed it?\n>>>\n>>>             regards, tom lane\n>> I can try -- I haven't been very good at that.\n>>\n>> I can give you access to the machine and the id the Buildfarm runs under.\n>>\n>> (or give me a good process starting from a buildfarm layout).\n>>\n>>\n> I will work on it on my FBSD setup.\n>\n>\n\nFor the 9.2 error, try setting this in the config_env stanza:\n\n\n    CFLAGS => '-O2 -fPIC',\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 23 Dec 2021 11:13:36 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "On 12/23/2021 10:13 am, Andrew Dunstan wrote:\n> On 12/23/21 08:50, Andrew Dunstan wrote:\n>> On 12/22/21 23:20, Larry Rosenman wrote:\n>>> On 12/22/2021 10:15 pm, Tom Lane wrote:\n>>>> Larry Rosenman <ler@lerctr.org> writes:\n>>>>> On 12/22/2021 9:59 pm, Tom Lane wrote:\n>>>>>> Does it work if you drop --enable-nls?  (It'd likely be worth \n>>>>>> fixing\n>>>>>> if so, but I'm trying to narrow the possible causes.)\n>>>>> Nope...\n>>>> OK.  Since 9.3 succeeds, it seems like it's a link problem\n>>>> we fixed at some point.  Can you bisect to find where we\n>>>> fixed it?\n>>>> \n>>>>             regards, tom lane\n>>> I can try -- I haven't been very good at that.\n>>> \n>>> I can give you access to the machine and the id the Buildfarm runs \n>>> under.\n>>> \n>>> (or give me a good process starting from a buildfarm layout).\n>>> \n>>> \n>> I will work on it on my FBSD setup.\n>> \n>> \n> \n> For the 9.2 error, try setting this in the config_env stanza:\n> \n> \n>     CFLAGS => '-O2 -fPIC',\n> \n> \n> cheers\n> \n> \n> andrew\n> \n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n\nThat got us further, but it dies on startdb:\n$ cat startdb-C-1.log\nwaiting for server to start.... 
stopped waiting\npg_ctl: could not start server\nExamine the log output.\n=========== db log file ==========\nLOG: unrecognized configuration parameter \"unix_socket_directories\" in \nfile \n\"/home/pgbuildfarm/buildroot/REL9_2_STABLE/inst/data-C/postgresql.conf\" \nline 576\nFATAL: configuration file \n\"/home/pgbuildfarm/buildroot/REL9_2_STABLE/inst/data-C/postgresql.conf\" \ncontains errors\n$\n\nAnd we have the errors on the other branches with a temp(?) directory.\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n\n\n", "msg_date": "Thu, 23 Dec 2021 10:27:48 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "\nOn 12/23/21 11:27, Larry Rosenman wrote:\n>\n>>>\n>>\n>> For the 9.2 error, try setting this in the config_env stanza:\n>>\n>>\n>>     CFLAGS => '-O2 -fPIC',\n>>\n>>\n>>\n>\n> That got us further, but it dies on startdb:\n> $ cat startdb-C-1.log\n> waiting for server to start.... stopped waiting\n> pg_ctl: could not start server\n> Examine the log output.\n> =========== db log file ==========\n> LOG:  unrecognized configuration parameter \"unix_socket_directories\"\n> in file\n> \"/home/pgbuildfarm/buildroot/REL9_2_STABLE/inst/data-C/postgresql.conf\"\n> line 576\n> FATAL:  configuration file\n> \"/home/pgbuildfarm/buildroot/REL9_2_STABLE/inst/data-C/postgresql.conf\"\n> contains errors\n> $\n>\n> And we have the errors on the other branches with a temp(?) directory.\n\n\nlooks like it's picking up the wrong perl libraries. 
Please show us the\noutput of\n\n grep -v secret /home/pgbuildfarm/buildroot/REL9_2_STABLE/$animal.lastrun-logs/web-txn.data\n\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 23 Dec 2021 12:23:43 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "On 12/23/2021 11:23 am, Andrew Dunstan wrote:\n> On 12/23/21 11:27, Larry Rosenman wrote:\n>> \n>>>> \n>>> \n>>> For the 9.2 error, try setting this in the config_env stanza:\n>>> \n>>> \n>>>     CFLAGS => '-O2 -fPIC',\n>>> \n>>> \n>>> \n>> \n>> That got us further, but it dies on startdb:\n>> $ cat startdb-C-1.log\n>> waiting for server to start.... stopped waiting\n>> pg_ctl: could not start server\n>> Examine the log output.\n>> =========== db log file ==========\n>> LOG:  unrecognized configuration parameter \"unix_socket_directories\"\n>> in file\n>> \"/home/pgbuildfarm/buildroot/REL9_2_STABLE/inst/data-C/postgresql.conf\"\n>> line 576\n>> FATAL:  configuration file\n>> \"/home/pgbuildfarm/buildroot/REL9_2_STABLE/inst/data-C/postgresql.conf\"\n>> contains errors\n>> $\n>> \n>> And we have the errors on the other branches with a temp(?) directory.\n> \n> \n> looks like it's picking up the wrong perl libraries. Please show us the\n> output of\n> \n> grep -v secret\n> /home/pgbuildfarm/buildroot/REL9_2_STABLE/$animal.lastrun-logs/web-txn.data\n> \n> \n> cheers\n> \n> andrew\n> \n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n\n\n$ grep -v secret web-txn.data\n$changed_this_run = '';\n$changed_since_success = '';\n$branch = 'REL9_2_STABLE';\n$status = 1;\n$stage = 'StartDb-C:1';\n$animal = 'gerenuk';\n$ts = 1640276469;\n$log_data = 'Last file mtime in snapshot: Wed Dec 15 23:00:28 2021 GMT\n===================================================\nwaiting for server to start.... 
stopped waiting\npg_ctl: could not start server\nExamine the log output.\n=========== db log file ==========\nLOG: unrecognized configuration parameter \"unix_socket_directories\" in \nfile \n\"/home/pgbuildfarm/buildroot/REL9_2_STABLE/inst/data-C/postgresql.conf\" \nline 576\nFATAL: configuration file \n\"/home/pgbuildfarm/buildroot/REL9_2_STABLE/inst/data-C/postgresql.conf\" \ncontains errors\n';\n$confsum = 'This file was created by PostgreSQL configure 9.2.24, which \nwas\ngenerated by GNU Autoconf 2.63. Invocation command line was\n\n $ ./configure --enable-cassert --enable-debug --enable-nls \n--enable-tap-tests \\\\\n --with-perl \n--prefix=/home/pgbuildfarm/buildroot/REL9_2_STABLE/inst \\\\\n --with-pgport=5700 \n--cache-file=/home/pgbuildfarm/buildroot/accache-gerenuk/config-REL9_2_STABLE.cache\n\n\nhostname = borg.lerctr.org\nuname -m = amd64\nuname -r = 14.0-CURRENT\nuname -s = FreeBSD\nuname -v = FreeBSD 14.0-CURRENT #26 main-n251898-acdc1de369a: Wed Dec 22 \n18:11:08 CST 2021 \nroot@borg.lerctr.org:/usr/obj/usr/src/amd64.amd64/sys/LER-MINIMAL\n\n/usr/bin/uname -p = amd64\n\n\nPATH: /home/ler/.asdf/shims\nPATH: /home/ler/.asdf/bin\nPATH: /sbin\nPATH: /bin\nPATH: /usr/sbin\nPATH: /usr/bin\nPATH: /usr/local/sbin\nPATH: /usr/local/bin\nPATH: /home/ler/bin\nPATH: /home/ler/go/bin\nPATH: /usr/local/opt/terraform@0.12/bin\nPATH: /home/ler/bin\n\n\n\n========================================================\n$Script_Config = {\n \\'alerts\\' => {},\n \\'animal\\' => \\'gerenuk\\',\n \\'base_port\\' => 5678,\n \\'bf_perl_version\\' => \\'5.32.1\\',\n \\'build_env\\' => {\n \\'INCLUDES\\' => \n\\'-I/usr/local/include\\',\n \\'LDFLAGS\\' => \\'-L/usr/local/lib\\'\n },\n \\'build_root\\' => \\'/home/pgbuildfarm/buildroot\\',\n \\'ccache_failure_remove\\' => undef,\n \\'config\\' => [],\n \\'config_env\\' => {\n \\'CFLAGS\\' => \\'-O2 -fPIC\\'\n },\n \\'config_opts\\' => [\n \\'--enable-cassert\\',\n \\'--enable-debug\\',\n \\'--enable-nls\\',\n 
\\'--enable-tap-tests\\',\n \\'--with-perl\\'\n ],\n \\'core_file_glob\\' => \\'*.core*\\',\n \\'extra_config\\' => {\n \\'DEFAULT\\' => [\n \\'log_line_prefix \n= \\\\\\'%m [%p:%l] %q%a \\\\\\'\\',\n \\'log_connections \n= \\\\\\'true\\\\\\'\\',\n \n\\'log_disconnections = \\\\\\'true\\\\\\'\\',\n \\'log_statement = \n\\\\\\'all\\\\\\'\\',\n \\'fsync = off\\'\n ]\n },\n \\'force_every\\' => {},\n \\'git_ignore_mirror_failure\\' => 1,\n \\'git_keep_mirror\\' => 1,\n \\'git_use_workdirs\\' => 1,\n \\'invocation_args\\' => [\n \\'--config\\',\n \n\\'/home/pgbuildfarm/conf/gerenuk.conf\\',\n \\'--test\\',\n \\'REL9_2_STABLE\\'\n ],\n \\'keep_error_builds\\' => 0,\n \\'locales\\' => [\n \\'C\\'\n ],\n \\'mail_events\\' => {\n \\'all\\' => [],\n \\'change\\' => [\n \\'ler@lerctr.org\\'\n ],\n \\'fail\\' => [\n \\'ler@lerctr.org\\'\n ],\n \\'green\\' => [\n \\'ler@lerctr.org\\'\n ]\n },\n \\'make\\' => \\'gmake\\',\n \\'make_jobs\\' => 10,\n \\'module_versions\\' => {\n \\'PGBuild::Log\\' => \n\\'REL_13.1\\',\n \n\\'PGBuild::Modules::TestDecoding\\' => \\'REL_13.1\\',\n \n\\'PGBuild::Modules::TestUpgrade\\' => \\'REL_13.1\\',\n \\'PGBuild::Options\\' => \n\\'REL_13.1\\',\n \\'PGBuild::SCM\\' => \n\\'REL_13.1\\',\n \\'PGBuild::Utils\\' => \n\\'REL_13.1\\',\n \\'PGBuild::WebTxn\\' => \n\\'REL_13.1\\'\n },\n \\'modules\\' => [\n \\'TestUpgrade\\',\n \\'TestDecoding\\'\n ],\n \\'optional_steps\\' => {},\n \\'orig_env\\' => {\n \\'BF_CONF_BRANCHES\\' => \n\\'REL9_2_STABLE,REL9_3_STABLE,REL9_4_STABLE,REL9_5_STABLE,REL9_6_STABLE\\',\n \\'EDITOR\\' => \\'xxxxxx\\',\n \\'ENV\\' => \\'xxxxxx\\',\n \\'HOME\\' => \\'/home/pgbuildfarm\\',\n \\'LANG\\' => \\'xxxxxx\\',\n \\'LOGNAME\\' => \\'pgbuildfarm\\',\n \\'MAIL\\' => \\'xxxxxx\\',\n \\'PAGER\\' => \\'xxxxxx\\',\n \\'PATH\\' => \n\\'/home/ler/.asdf/shims:/home/ler/.asdf/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/ler/bin:/home/ler/go/bin:/usr/local/opt/terraform@0.12/bin:/home/ler/bin\\',\n \\'PWD\\' => 
\\'xxxxxx\\',\n \\'SHELL\\' => \\'/bin/sh\\',\n \\'SUDO_COMMAND\\' => \\'xxxxxx\\',\n \\'SUDO_GID\\' => \\'xxxxxx\\',\n \\'SUDO_UID\\' => \\'xxxxxx\\',\n \\'SUDO_USER\\' => \\'xxxxxx\\',\n \\'TERM\\' => \\'xxxxxx\\',\n \\'USER\\' => \\'pgbuildfarm\\'\n },\n \\'scm\\' => \\'git\\',\n \\'scm_url\\' => undef,\n \\'scmrepo\\' => undef,\n \\'script_version\\' => \\'REL_13.1\\',\n \\'steps_completed\\' => [\n \\'SCM-checkout\\',\n \\'Configure\\',\n \\'Make\\',\n \\'Check\\',\n \\'Contrib\\',\n \\'Install\\',\n \\'ContribInstall\\',\n \\'pg_upgradeCheck\\',\n \\'MiscCheck\\',\n \\'Initdb-C\\'\n ],\n \\'tar_log_cmd\\' => undef,\n \\'target\\' => \n\\'https://buildfarm.postgresql.org/cgi-bin/pgstatus.pl\\',\n \\'trigger_exclude\\' => qr/^doc\\\\/|\\\\.po$/,\n \\'trigger_include\\' => undef,\n \\'upgrade_target\\' => \n\\'https://buildfarm.postgresql.org/cgi-bin/upgrade.pl\\',\n \\'use_default_ccache_dir\\' => 0,\n \\'use_git_cvsserver\\' => undef,\n \\'use_vpath\\' => undef,\n \\'using_msvc\\' => undef,\n \\'wait_timeout\\' => undef\n };\n';\n$target = 'https://buildfarm.postgresql.org/cgi-bin/pgstatus.pl';\n$verbose = 1;\n$ pwd\n/home/pgbuildfarm/buildroot/REL9_2_STABLE/gerenuk.lastrun-logs\n$\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n\n\n", "msg_date": "Thu, 23 Dec 2021 11:42:01 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "\nOn 12/23/21 12:23, Andrew Dunstan wrote:\n> On 12/23/21 11:27, Larry Rosenman wrote:\n>>> For the 9.2 error, try setting this in the config_env stanza:\n>>>\n>>>\n>>>     CFLAGS => '-O2 -fPIC',\n>>>\n>>>\n>>>\n>> That got us further, but it dies on startdb:\n>> $ cat startdb-C-1.log\n>> waiting for server to start.... 
stopped waiting\n>> pg_ctl: could not start server\n>> Examine the log output.\n>> =========== db log file ==========\n>> LOG:  unrecognized configuration parameter \"unix_socket_directories\"\n>> in file\n>> \"/home/pgbuildfarm/buildroot/REL9_2_STABLE/inst/data-C/postgresql.conf\"\n>> line 576\n>> FATAL:  configuration file\n>> \"/home/pgbuildfarm/buildroot/REL9_2_STABLE/inst/data-C/postgresql.conf\"\n>> contains errors\n>> $\n>>\n>> And we have the errors on the other branches with a temp(?) directory.\n>\n> looks like it's picking up the wrong perl libraries. Please show us the\n> output of\n>\n> grep -v secret /home/pgbuildfarm/buildroot/REL9_2_STABLE/$animal.lastrun-logs/web-txn.data\n>\n>\n\n\nOh, you need to be building with the buildfarm client's git tip, not the\nreleased code. Alternatively, apply this patch:\n\n\nhttps://github.com/PGBuildFarm/client-code/commit/75c762ba74fdec96ebf6c2433d61d3eeead825c3\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 23 Dec 2021 12:58:49 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Buildfarm support for older versions" }, { "msg_contents": "On 12/23/2021 11:58 am, Andrew Dunstan wrote:\n\n> \n> Oh, you need to be building with the buildfarm client's git tip, not \n> the\n> released code. 
Alternatively, apply this patch:\n> \n> \n> https://github.com/PGBuildFarm/client-code/commit/75c762ba74fdec96ebf6c2433d61d3eeead825c3\n> \n> \n\nwith git tip:\noutput attached.\n\nI can give access if needed.\n\n$ grep -v secret conf/gerenuk.conf\n# -*-perl-*- hey - emacs - this is a perl file\n\n=comment\n\nCopyright (c) 2003-2010, Andrew Dunstan\n\nSee accompanying License file for license details\n\n=cut\n\npackage PGBuild;\n\nuse strict;\n\nuse warnings FATAL => 'qw';\n\nuse vars qw(%conf);\n\n# use vars qw($VERSION); $VERSION = 'REL_4.19';\n\nmy $branch;\n{\n no warnings qw(once);\n $branch = $main::branch;\n}\n\n%conf =(\n\n # identity\n animal => \"gerenuk\",\n\n # source code\n scm => 'git', # or 'cvs'\n git_keep_mirror => 1, # manage a git mirror in the build root\n git_ignore_mirror_failure => 1, # ignore failures in fetching to \nmirror\n\n # use symlinked git repo from non-HEAD branches,\n # like git-new-workdir does\n git_use_workdirs => 1,\n\n scmrepo => undef, # default is community repo for either type\n scm_url => undef, # webref for diffs on server - use default for \ncommunity\n # git_reference => undef, # for --reference on git repo\n # cvsmethod => 'update', # or 'export'\n use_git_cvsserver => undef, # or 'true' if repo is a git cvsserver\n\n # external commands and control\n make => 'gmake', # or gmake if required. 
can include path if \nnecessary.\n make_jobs => 10, # >1 for parallel \"make\" and \"make check\" steps\n tar_log_cmd => undef, # default is \"tar -z -cf runlogs.tgz *.log\"\n # replacement must have the same effect\n\n # max time in seconds allowed for a single branch run\n # undef/0 means unlimited\n wait_timeout => undef,\n\n # where and how to build\n # must be absolute, can be either Unix or Windows style for MSVC\n # undef means default, buildroot dir in script directory\n build_root => '/home/pgbuildfarm/buildroot', , # or \n'/path/to/buildroot',\n use_vpath => undef, # set true to do vpath builds\n\n # path to directory with auxiliary web script\n # if relative, the must be relative to buildroot/branch\n # Now only used on older Msys installations\n # aux_path => \"../..\",\n\n keep_error_builds => 0,\n core_file_glob => \"*.core*\", # Linux style, use \"*.core\" for BSD\n\n # where to report status\n target => \"https://buildfarm.postgresql.org/cgi-bin/pgstatus.pl\",\n\n # where to report change in OS version or compiler version\n upgrade_target => \n\"https://buildfarm.postgresql.org/cgi-bin/upgrade.pl\",\n\n # change this to a true value if using MSVC, in which case also\n # see MSVC section below\n\n using_msvc => undef,\n\n # if force_every is a scalar it will be used on all branches, like \nthis\n # for legacy reasons:\n # force_every => 336 , # max hours between builds, undef or 0 = \nunforced\n # we now prefer it to be a hash with branch names as the keys, like \nthis\n #\n # this setting should be kept conservatively high, or not used at \nall -\n # for the most part it's best to let the script decide if something\n # has changed that requires a new run for the branch.\n #\n # an entry with a name of 'default' matches any branch not named\n force_every => {\n\n # HEAD => 48,\n # REL8_3_STABLE => 72,\n # default => 168,\n },\n\n # alerts are triggered if the server doesn't see a build on a branch \nafter\n # this many hours, and then sent out every 
so often,\n\n alerts => {\n\n #HEAD => { alert_after => 72, alert_every => 24 },\n # REL8_1_STABLE => { alert_after => 240, alert_every => 48 },\n },\n\n # include / exclude patterns for files that trigger a build\n # if both are specified then they are both applied as filters\n # undef means don't ignore anything.\n # exclude qr[^doc/|\\.po$] to ignore changes to docs and po files\n # (recommended)\n # undef means null filter.\n trigger_exclude => qr[^doc/|\\.po$],\n trigger_include => undef,\n\n # settings for mail notices - default to notifying nobody\n # these lists contain addresses to be notified\n # must be complete email addresses, as the email is sent from the \nserver\n\n mail_events =>{\n all => [], # unconditional\n fail => [\"ler\\@lerctr.org\"], # if this build fails\n change => [\"ler\\@lerctr.org\"], # if this build causes a state \nchange\n green => [\"ler\\@lerctr.org\"], # if this build causes a state \nchange to/from OK\n },\n\n # if this flag is set and ccache is used, an unsuccessful run will \nresult\n # in the removal of the ccache directory (and you need to make sure \nthat\n # its parent is writable). The default is off - ccache should be \nable to\n # handle failures, although there have been suspicions in the past \nthat\n # it's not quite as reliable as we'd want, and thus we have this \noption.\n\n ccache_failure_remove => undef,\n\n # set this if you want to use ccache with the default ccache \ndirectory\n # location, effectively $buildroot/ccache-$animal.\n\n use_default_ccache_dir => 0,\n\n # env settings to apply within build/report process\n # these settings will be seen by all the processes, including the\n # configure process.\n\n build_env =>{\n\n # use a dedicated cache for the build farm. 
this should give us\n # very high hit rates and slightly faster cache searching.\n #\n # only set this if you want to set your own path for the ccache\n # directory\n #CCACHE_DIR => \"/home/pgbuildfarm/misc/ccache\",\n\n ### use these settings for CYGWIN\n # CYGWIN => 'server',\n # MAX_CONNECTIONS => '3',\n\n ### set this if you need a proxy setting for the\n # outbound web transaction that reports the results\n # BF_PROXY => 'http://my.proxy.server:portnum/',\n\n # see below for MSVC settings\n\n # possibly set this to something high if you get pg_ctl failures\n # default is 120\n # PGCTLTIMEOUT => '120',\n INCLUDES=>\"-I/usr/local/include\",\n LDFLAGS=>\"-L/usr/local/lib\",\n\n },\n\n # env settings to pass to configure. These settings will only be \nseen by\n # configure.\n config_env =>{\n\n # comment out if not using ccache\n # CC => 'ccache clang',\n CFLAGS => '-O2 -fPIC',\n },\n\n # don't use --prefix or --with-pgport here\n # they are set up by the script\n # per-branch config can be done here or\n # more simply by using the examples below.\n # (default ldap flag is below because it's not supported in all \nbranches)\n\n # see below for MSVC config\n\n config_opts =>[\n qw(\n --enable-cassert\n --enable-debug\n --enable-nls\n --enable-tap-tests\n --with-perl\n )\n ],\n\n # per-branch contents of extra config for check stages.\n # each branch has an array of setting lines (no \\n required)\n # a DEFAULT entry is used for all branches, before any\n # branch-specific settings.\n extra_config =>{\n DEFAULT => [\n q(log_line_prefix = '%m [%p:%l] %q%a '),\n \"log_connections = 'true'\",\n \"log_disconnections = 'true'\",\n \"log_statement = 'all'\",\n \"fsync = off\"\n ],\n },\n\n optional_steps =>{\n\n # which optional steps to run and when to run them\n # valid keys are: branches, dow, min_hours_since, min_hour, \nmax_hour\n # find_typedefs => { branches => ['HEAD'], dow => [1,4],\n #\t \t\t\t min_hours_since => 25 },\n # build_docs => {min_hours_since => 
24},\n },\n\n # locales to test\n\n locales => [qw( C )],\n\n # port number actually used will be based on this param and the \nbranch,\n # so we ensure they don't collide\n\n base_port => 5678,\n\n modules => [qw(TestUpgrade TestDecoding)],\n\n #\n);\n\nif ($branch eq 'global')\n{\n\n $conf{branches_to_build} = [qw( REL9_2_STABLE REL9_3_STABLE \nREL9_4_STABLE\n REL9_5_STABLE REL9_6_STABLE)]\n\n # or 'HEAD_PLUS_LATEST' or 'HEAD_PLUS_LATEST2'\n # or [qw( HEAD RELx_y_STABLE etc )]\n\n # set this if you are running multiple animals and want them \ncoordinated\n\n # $conf{global_lock_dir} = '/path/to/lockdir';\n\n}\n\n# MSVC setup\n\nif ($conf{using_msvc})\n{\n\n # all this stuff is to support MSVC builds - it's literally what\n # a VS Command Prompt sets for me.\n # make sure it's what your sets for you. There can be subtle \ndifferences.\n # Note: we need to set here whatever would be set in buildenv.bat, \nas\n # we aren't going to write that file. This supercedes it. In\n # particular, the PATH possibly needs to include the path to perl, \nbison,\n # flex etc., as well as CVS if that's not in the path.\n\n my %extra_buildenv =(\n\n VSINSTALLDIR => 'C:\\Program Files\\Microsoft Visual Studio 8',\n VCINSTALLDIR => 'C:\\Program Files\\Microsoft Visual Studio 8\\VC',\n VS80COMNTOOLS =>\n 'C:\\Program Files\\Microsoft Visual Studio 8\\Common7\\Tools',\n FrameworkDir => 'C:\\WINDOWS\\Microsoft.NET\\Framework',\n FrameworkVersion => 'v2.0.50727',\n FrameworkSDKDir =>'C:\\Program Files\\Microsoft Visual Studio \n8\\SDK\\v2.0',\n DevEnvDir => 'C:\\Program Files\\Microsoft Visual Studio \n8\\Common7\\IDE',\n\n PATH => join(';',\n 'C:\\Program Files\\Microsoft Visual Studio 8\\Common7\\IDE',\n 'C:\\Program Files\\Microsoft Visual Studio 8\\VC\\BIN',\n 'C:\\Program Files\\Microsoft Visual Studio 8\\Common7\\Tools',\n 'C:\\Program Files\\Microsoft Visual Studio \n8\\Common7\\Tools\\bin',\n 'C:\\Program Files\\Microsoft Visual Studio \n8\\VC\\PlatformSDK\\bin',\n 'C:\\Program 
Files\\Microsoft Visual Studio 8\\SDK\\v2.0\\bin',\n 'C:\\WINDOWS\\Microsoft.NET\\Framework\\v2.0.50727',\n 'C:\\Program Files\\Microsoft Visual Studio 8\\VC\\VCPackages',\n 'C:\\Perl\\Bin',\n 'c:\\prog\\pgdepend\\bin',\n $ENV{PATH}),\n INCLUDE => join(';',\n 'C:\\Program Files\\Microsoft Visual Studio \n8\\VC\\ATLMFC\\INCLUDE',\n 'C:\\Program Files\\Microsoft Visual Studio 8\\VC\\INCLUDE',\n 'C:\\Program Files\\Microsoft Visual Studio \n8\\VC\\PlatformSDK\\include',\n 'C:\\Program Files\\Microsoft Visual Studio \n8\\SDK\\v2.0\\include',\n $ENV{INCLUDE}),\n\n LIB => join(';',\n 'C:\\Program Files\\Microsoft Visual Studio 8\\VC\\ATLMFC\\LIB',\n 'C:\\Program Files\\Microsoft Visual Studio 8\\VC\\LIB'\n .'C:\\Program Files\\Microsoft Visual Studio \n8\\VC\\PlatformSDK\\lib',\n 'C:\\Program Files\\Microsoft Visual Studio 8\\SDK\\v2.0\\lib'\n .$ENV{LIB}),\n\n LIBPATH => join(';',\n 'C:\\WINDOWS\\Microsoft.NET\\Framework\\v2.0.50727',\n 'C:\\Program Files\\Microsoft Visual Studio 8\\VC\\ATLMFC\\LIB'),\n );\n\n %{$conf{build_env}} = (%{$conf{build_env}}, %extra_buildenv);\n\n # MSVC needs a somewhat different style of config opts (why??)\n # What we write here will be literally (via Data::Dumper) put into\n # the config.pl file for the MSVC build.\n\n $conf{config_opts} ={\n asserts=>1, # --enable-cassert\n integer_datetimes=>1, # --enable-integer-datetimes\n nls=>undef, # --enable-nls=<path>\n tcl=>'c:\\tcl', # --with-tcl=<path>\n perl=>'c:\\perl', # --with-perl=<path>\n python=>'c:\\python25', # --with-python=<path>\n krb5=> undef, # --with-krb5=<path>\n ldap=>0, # --with-ldap\n openssl=> undef, # --with-ssl=<path>\n xml=> undef, # --with-libxml=<path>\n xslt=> undef, # --with-libxslt=<path>,\n iconv=> undef, # path to iconv library\n zlib=> undef, # --with-zlib=<path>\n };\n\n}\n\n##################################\n#\n# examples of per branch processing\n# tailor as required for your site.\n#\n##################################\nif ($branch eq 'HEAD')\n{\n\n 
#\tpush(@{$conf{config_opts}},\"--enable-depend\");\n}\nelsif ($branch =~ /^REL7_/)\n{\n\n # push(@{$conf{config_opts}},\"--without-tk\");\n}\n\n1;\n$\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106", "msg_date": "Thu, 23 Dec 2021 19:09:35 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Buildfarm support for older versions" } ]
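The thread above resolves once the buildfarm client knows which socket-directory GUC name each branch understands: the plural `unix_socket_directories` only exists from PostgreSQL 9.3 onward, which is why the 9.2 server in the startdb log rejects it. The following is a hypothetical Python sketch of that branch-to-GUC mapping only; the real fix is the linked Perl client-code commit, and the function name here is illustrative:

```python
import re

def socket_dir_guc(branch: str) -> str:
    """Pick the socket-directory GUC name for a buildfarm branch name.

    The plural 'unix_socket_directories' was introduced in PostgreSQL 9.3;
    9.2 and older servers only accept the singular 'unix_socket_directory',
    as the startdb failure in this thread shows.
    """
    m = re.match(r'REL(\d+)_(\d+)_STABLE$', branch)
    if m and (int(m.group(1)), int(m.group(2))) < (9, 3):
        return 'unix_socket_directory'
    # HEAD, REL_<major>_STABLE (v10 and later), and 9.3+ take the plural form.
    return 'unix_socket_directories'

for b in ('REL9_2_STABLE', 'REL9_3_STABLE', 'REL_14_STABLE'):
    print(b, '->', socket_dir_guc(b))
```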
[ { "msg_contents": "Hello,\n\nI realized that the output of \"\\df+ func_name\" has a formatting problem\nwhen a\nlot of arguments are used. The field 'Arguments data types' gets very long\nand\ndestroys the whole formatting in the console. The field 'Source code' is\nmost of\nthe time multi-line and I thought that the output for the field 'Arguments\ndata\ntypes' could also be multiline with one line for each argument.\n\nRegards,\nFlorian Koch\n\nHello,I realized that the output of \"\\df+ func_name\" has a formatting problem when alot of arguments are used. The field 'Arguments data types' gets very long anddestroys the whole formatting in the console. The field 'Source code' is most ofthe time multi-line and I thought that the output for the field 'Arguments data types' could also be multiline with one line for each argument.Regards,Florian Koch", "msg_date": "Wed, 15 Dec 2021 20:58:02 +0100", "msg_from": "Florian Koch <florian.murat.koch@gmail.com>", "msg_from_op": true, "msg_subject": "psql format output" }, { "msg_contents": "Hi\n\nst 15. 12. 2021 v 21:16 odesílatel Florian Koch <\nflorian.murat.koch@gmail.com> napsal:\n\n> Hello,\n>\n> I realized that the output of \"\\df+ func_name\" has a formatting problem\n> when a\n> lot of arguments are used. The field 'Arguments data types' gets very long\n> and\n> destroys the whole formatting in the console. The field 'Source code' is\n> most of\n> the time multi-line and I thought that the output for the field 'Arguments\n> data\n> types' could also be multiline with one line for each argument.\n>\n\ntry to use pager\n\nhttps://github.com/okbob/pspg\n\nRegards\n\nPavel\n\n\n> Regards,\n> Florian Koch\n>\n\nHist 15. 12. 2021 v 21:16 odesílatel Florian Koch <florian.murat.koch@gmail.com> napsal:Hello,I realized that the output of \"\\df+ func_name\" has a formatting problem when alot of arguments are used. The field 'Arguments data types' gets very long anddestroys the whole formatting in the console. 
The field 'Source code' is most ofthe time multi-line and I thought that the output for the field 'Arguments data types' could also be multiline with one line for each argument.try to use pager https://github.com/okbob/pspgRegardsPavelRegards,Florian Koch", "msg_date": "Wed, 15 Dec 2021 21:26:37 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql format output" }, { "msg_contents": "On 15.12.21 20:58, Florian Koch wrote:\n> I realized that the output of \"\\df+ func_name\" has a formatting problem \n> when a\n> lot of arguments are used. The field 'Arguments data types' gets very \n> long and\n> destroys the whole formatting in the console. The field 'Source code' is \n> most of\n> the time multi-line and I thought that the output for the field \n> 'Arguments data\n> types' could also be multiline with one line for each argument.\n\nThat's a reasonable idea. I wonder if it would work in general. If \nsomeone had a C function (so no source code) with three arguments, they \nmight be annoyed if it now displayed as three lines by default.\n\n\n", "msg_date": "Fri, 17 Dec 2021 11:08:37 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: psql format output" }, { "msg_contents": "On Fri, Dec 17, 2021 at 5:08 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> That's a reasonable idea. I wonder if it would work in general. 
If\n> someone had a C function (so no source code) with three arguments, they\n> might be annoyed if it now displayed as three lines by default.\n\nThe problem I see is that each of those three lines would probably wrap.\n\nFor example, consider:\n\nrhaas=# \\df+ pg_copy_logical_replication_slot\n\nIn an 80-column window, the first non-header line of output looks like this:\n\n pg_catalog | pg_copy_logical_replication_slot | record | src_slot_nam\n\nSince we don't even fit the whole parameter name and data type in\nthere, never mind the rest of the columns, the proposed solution can't\nhelp here. Each of the three output lines are over 300 characters.\n\nWhen I full-screen my terminal window, it is 254 characters wide. So\nif I were working full screen, then this proposal would cause that\noutput not to wrap when it otherwise would have done so. But if I were\nworking with a normal size window or even somewhat wider than normal,\nit would just give me multiple wrapped lines.\n\nIt's hard to make any general judgment about how wide people's\nterminal windows are likely to be, but it is my opinion that the root\nof the problem is that \\df+ just wants to display a whole lot of stuff\n- and as hackers add more function properties in the future, they're\nlikely to get added in here as well. 
This output format doesn't scale\nnicely for that kind of thing, but it's unclear to me what would be\nany better.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Dec 2021 09:44:32 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql format output" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> It's hard to make any general judgment about how wide people's\n> terminal windows are likely to be, but it is my opinion that the root\n> of the problem is that \\df+ just wants to display a whole lot of stuff\n> - and as hackers add more function properties in the future, they're\n> likely to get added in here as well. This output format doesn't scale\n> nicely for that kind of thing, but it's unclear to me what would be\n> any better.\n\nI think the complaint is that even with \\x mode, which fixes most\ncomplaints of this sort, the arguments are still too wide:\n\n-[ RECORD 1 ]-------+-----------------------------------------------------------------------------------------------------------\nSchema | pg_catalog\nName | pg_copy_logical_replication_slot\nResult data type | record\nArgument data types | src_slot_name name, dst_slot_name name, OUT slot_name name, OUT lsn pg_lsn\nType | func\nVolatility | volatile\nParallel | unsafe\nOwner | postgres\nSecurity | invoker\nAccess privileges | \nLanguage | internal\nSource code | pg_copy_logical_replication_slot_c\nDescription | copy a logical replication slot\n\nThe OP wants to fix that by inserting newlines in the \"Argument data\ntypes\" column, which'd help, but it seems to me to be mostly a kluge.\nThat's prejudging a lot about how the output will be displayed.\nA more SQL-ish way to do things would be to turn the argument items\ninto a set of rows. 
I don't quite see how to make that work here,\nbut maybe I'm just undercaffeinated as yet.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 Dec 2021 10:20:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql format output" }, { "msg_contents": "On 2021-Dec-20, Tom Lane wrote:\n\n> -[ RECORD 1 ]-------+-----------------------------------------------------------------------------------------------------------\n> Schema | pg_catalog\n> Name | pg_copy_logical_replication_slot\n> Result data type | record\n> Argument data types | src_slot_name name, dst_slot_name name, OUT slot_name name, OUT lsn pg_lsn\n\n> The OP wants to fix that by inserting newlines in the \"Argument data\n> types\" column, which'd help, but it seems to me to be mostly a kluge.\n> That's prejudging a lot about how the output will be displayed.\n> A more SQL-ish way to do things would be to turn the argument items\n> into a set of rows. I don't quite see how to make that work here,\n> but maybe I'm just undercaffeinated as yet.\n\nMaybe one way to improve on this is to have the server inject optional\nline break markers (perhaps U+FEFF) that the client chooses whether or\nnot to convert into a physical line break, based on line length. So in\n\\df we would use a query that emits such a marker after every comma and\nthen psql measures line width and crams as many items in each line as\nwill fit. In the above example you would still have a single line for\narguments, because your terminal seems wide enough, but if you use a\nsmaller terminal then it'd be broken across several.\n\npsql controls what happens because it owns the \\df query anyway. 
It\ncould be something simple like\n\npg_catalog.add_psql_linebreaks(pg_catalog.pg_get_function_arguments(p.oid))\n\nif we don't want to intrude into pg_get_function_arguments itself.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Always assume the user will do much worse than the stupidest thing\nyou can imagine.\" (Julien PUYDT)\n\n\n", "msg_date": "Mon, 20 Dec 2021 12:36:15 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: psql format output" } ]
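The suggestions in this thread (per-argument newlines, or Álvaro's optional break markers that the client honors based on line length) amount to width-aware wrapping of the `pg_get_function_arguments()` string. A rough Python sketch of that idea, illustrative only and not psql's actual code, which splits on top-level commas so types such as `numeric(10,2)` stay intact:

```python
def wrap_args(arglist: str, width: int = 80) -> str:
    """Render a pg_get_function_arguments()-style string one argument per
    line, but only when the one-line form would exceed `width`."""
    if len(arglist) <= width:
        return arglist
    parts, cur, depth = [], '', 0
    for ch in arglist:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
        if ch == ',' and depth == 0:
            # top-level comma: end the current argument here
            parts.append(cur.strip())
            cur = ''
        else:
            cur += ch
    parts.append(cur.strip())
    return ',\n'.join(parts)

example = ('src_slot_name name, dst_slot_name name, '
           'OUT slot_name name, OUT lsn pg_lsn')
print(wrap_args(example, width=40))
```

With the break decision made client side like this, a narrow terminal gets one argument per line while a wide one keeps the compact single-line form, which is roughly what the marker proposal above would let psql do.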
[ { "msg_contents": "Hello all,\r\n\r\nlibpq currently supports server certificates with a single IP address\r\nin the Common Name. It's fairly brittle; as far as I can tell, the\r\nsingle name you choose has to match the client's address exactly.\r\n\r\nAttached is a patch for libpq to support IP addresses in the server's\r\nSubject Alternative Names, which would allow admins to issue certs for\r\nmultiple IP addresses, both IPv4 and IPv6, and mix them with\r\nalternative DNS hostnames. These addresses are compared bytewise\r\ninstead of stringwise, so the client can contact the server via\r\nalternative spellings of the same IP address.\r\n\r\nThis patch arose because I was writing tests for the NSS implementation\r\nthat used a server cert with both DNS names and IP addresses, and then\r\nthey failed when I ran those tests against the OpenSSL implementation.\r\nNSS supports this functionality natively. Anecdotally, I've heard from\r\nat least one client group who is utilizing IP-based certificates in\r\ntheir cloud deployments. It seems uncommon but still useful.\r\n\r\nThere are two open questions I have; they're based on NSS\r\nimplementation details that I did not port here:\r\n\r\n- NSS allows an IPv4 SAN to match an IPv6 mapping of that same address,\r\n and vice-versa. I chose not to implement that behavior, figuring it\r\n is easy enough for people to issue a certificate with both addresses.\r\n Is that okay?\r\n\r\n- If a certificate contains only iPAddress SANs, and none of them\r\n match, I fall back to check the certificate Common Name. OpenSSL will\r\n not do this (its X509_check_ip considers only SANs). NSS will only do\r\n this if the client's address is itself a DNS name. The spec says that\r\n we can't fall back to Common Name if the SANs contain any DNS\r\n entries, but it's silent on the subject of IP addresses. 
What should\r\n the behavior be?\r\n\r\nThe patchset roadmap:\r\n\r\n- 0001 moves inet_net_pton() to src/port, since libpq will need it.\r\n- 0002 implements the new functionality and adds tests.\r\n\r\nWDYT?\r\n\r\nThanks,\r\n--Jacob", "msg_date": "Thu, 16 Dec 2021 01:13:57 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "[PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "At Thu, 16 Dec 2021 01:13:57 +0000, Jacob Champion <pchampion@vmware.com> wrote in \n> This patch arose because I was writing tests for the NSS implementation\n> that used a server cert with both DNS names and IP addresses, and then\n> they failed when I ran those tests against the OpenSSL implementation.\n> NSS supports this functionality natively. Anecdotally, I've heard from\n> at least one client group who is utilizing IP-based certificates in\n> their cloud deployments. It seems uncommon but still useful.\n> \n> There are two open questions I have; they're based on NSS\n> implementation details that I did not port here:\n> \n> - NSS allows an IPv4 SAN to match an IPv6 mapping of that same address,\n> and vice-versa. I chose not to implement that behavior, figuring it\n> is easy enough for people to issue a certificate with both addresses.\n> Is that okay?\n\n> - If a certificate contains only iPAddress SANs, and none of them\n> match, I fall back to check the certificate Common Name. OpenSSL will\n> not do this (its X509_check_ip considers only SANs). NSS will only do\n> this if the client's address is itself a DNS name. The spec says that\n> we can't fall back to Common Name if the SANs contain any DNS\n> entries, but it's silent on the subject of IP addresses. 
What should\n> the behavior be?\n> \n> The patchset roadmap:\n> \n> - 0001 moves inet_net_pton() to src/port, since libpq will need it.\n> - 0002 implements the new functionality and adds tests.\n> \n> WDYT?\n\nIn RFC 2818 and 6125,\n\n> In some cases, the URI is specified as an IP address rather than a\n> hostname. In this case, the iPAddress subjectAltName must be present\n> in the certificate and must exactly match the IP in the URI.\n\nThis seems to say that we must search for iPAddress and mustn't use\nCN nor dNSName if the client connected using an IP address. Otherwise,\nif the host name is a domain name, we use only dNSName if present, and\nuse CN otherwise. That behavior seems to agree with what you wrote as\nNSS's behavior. That being said, it seems to me we should preserve\nthat behavior at least for OpenSSL as an established behavior.\n\nIn short, I think the current behavior of the patch is the direction\nwe would go, but some documentation may be needed.\n\nI'm not sure about ipv4 compatible addresses. However, I think we can\nidentify ipv4 compatible addresses easily.\n\n+\t\t * pg_inet_net_pton() will accept CIDR masks, which we don't want to\n+\t\t * match, so skip the comparison if the host string contains a slash.\n+\t\t */\n+\t\tif (!strchr(host, '/')\n+\t\t\t&& pg_inet_net_pton(PGSQL_AF_INET6, host, addr, -1) == 128)\n\nIf a cidr is given, pg_inet_net_pton returns a number less than 128 so\nwe don't need to check '/' explicitly? (I'm not sure '/128' is\nsensible but doesn't harm..)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 16 Dec 2021 14:54:18 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "\nOn 12/15/21 20:13, Jacob Champion wrote:\n> Hello all,\n>\n> libpq currently supports server certificates with a single IP address\n> in the Common Name. 
It's fairly brittle; as far as I can tell, the\n> single name you choose has to match the client's address exactly.\n>\n> Attached is a patch for libpq to support IP addresses in the server's\n> Subject Alternative Names, which would allow admins to issue certs for\n> multiple IP addresses, both IPv4 and IPv6, and mix them with\n> alternative DNS hostnames. These addresses are compared bytewise\n> instead of stringwise, so the client can contact the server via\n> alternative spellings of the same IP address.\n\n\nGood job, this is certainly going to be useful.\n\n\n\n>\n> This patch arose because I was writing tests for the NSS implementation\n> that used a server cert with both DNS names and IP addresses, and then\n> they failed when I ran those tests against the OpenSSL implementation.\n> NSS supports this functionality natively. Anecdotally, I've heard from\n> at least one client group who is utilizing IP-based certificates in\n> their cloud deployments. It seems uncommon but still useful.\n>\n> There are two open questions I have; they're based on NSS\n> implementation details that I did not port here:\n>\n> - NSS allows an IPv4 SAN to match an IPv6 mapping of that same address,\n> and vice-versa. I chose not to implement that behavior, figuring it\n> is easy enough for people to issue a certificate with both addresses.\n> Is that okay?\n\n\nSure.\n\n\n>\n> - If a certificate contains only iPAddress SANs, and none of them\n> match, I fall back to check the certificate Common Name. OpenSSL will\n> not do this (its X509_check_ip considers only SANs). NSS will only do\n> this if the client's address is itself a DNS name. The spec says that\n> we can't fall back to Common Name if the SANs contain any DNS\n> entries, but it's silent on the subject of IP addresses. What should\n> the behavior be?\n\n\nI don't think we should fall back on the CN. 
It would seem quite odd to\ndo so for IP addresses but not for DNS names.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 16 Dec 2021 10:50:28 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On Thu, 2021-12-16 at 14:54 +0900, Kyotaro Horiguchi wrote:\r\n> In RFC2828 and 6125,\r\n> \r\n> > In some cases, the URI is specified as an IP address rather than a\r\n> > hostname. In this case, the iPAddress subjectAltName must be present\r\n> > in the certificate and must exactly match the IP in the URI.\r\n\r\nAh, right, I misremembered. Disregard my statement that the spec is\r\n\"silent on the subject\", sorry.\r\n\r\n> It seems like saying that we must search for iPAddress and mustn't use\r\n> CN nor dNSName if the client connected using IP address. Otherwise, if\r\n> the host name is a domain name, we use only dNSName if present, and\r\n> use CN otherwise. That behavior seems agreeing to what you wrote as\r\n> NSS's behavior.\r\n\r\nNSS departs slightly from the spec and will additionally try to match\r\nan IP address against the CN, but only if there are no iPAddresses in\r\nthe SAN. It roughly matches the logic for DNS names.\r\n\r\nHere's the description of the NSS behavior and some of the reasoning\r\nbehind it, quoted from a developer on Bugzilla [1]:\r\n\r\n> Elsewhere in RFC 2818, it says \r\n> \r\n> If a subjectAltName extension of type dNSName is present, that MUST\r\n> be used as the identity. Otherwise, the (most specific) Common Name\r\n> field in the Subject field of the certificate MUST be used. \r\n> \r\n> Notice that this section is not conditioned upon the URI being a hostname\r\n> and not an IP address. So this statement conflicts with the one cited \r\n> above. 
\r\n> \r\n> I implemented this policy:\r\n> \r\n> if the URI contains a host name\r\n> if the subject alt name is present and has one or more DNS names\r\n> use the DNS names in that extension as the server identity\r\n> else\r\n> use the subject common name as the server identity\r\n> else if the URI contains an IP address\r\n> if the subject alt name is present and has one or more IP addresses\r\n> use the IP addresses in that extension as the server identity\r\n> else\r\n> compare the URI IP address string with the subject common name.\r\n\r\nIt sounds like both you and Andrew might be comfortable with that same\r\nbehavior? I think it looks like a sane solution, so I'll implement that\r\nand we can see what it looks like. (My work on this will be paused over\r\nthe end-of-year holidays.)\r\n\r\n> That being said it seems to me we should preserve\r\n> that behavior at least for OpenSSL as an established behavior.\r\n\r\nThat part is interesting. I'll talk more about that in my reply to\r\nAndrew.\r\n\r\n> In short, I think the current behavior of the patch is the direction\r\n> we would go but some documentation is may be needed.\r\n\r\nGreat!\r\n\r\n> I'm not sure about ipv4 comptible addresses. However, I think we can\r\n> identify ipv4 compatible address easily.\r\n\r\nYeah, it would probably not be a difficult feature to add later.\r\n\r\n> + * pg_inet_net_pton() will accept CIDR masks, which we don't want to\r\n> + * match, so skip the comparison if the host string contains a slash.\r\n> + */\r\n> + if (!strchr(host, '/')\r\n> + && pg_inet_net_pton(PGSQL_AF_INET6, host, addr, -1) == 128)\r\n> \r\n> If a cidr is given, pg_inet_net_pton returns a number less than 128 so\r\n> we don't need to check '/' explicity? 
(I'm not sure '/128' is\r\n> sensible but doesn't harm..)\r\n\r\nPersonally I think that, if someone wants your libpq to connect to a\r\nserver with a hostname of \"some:ipv6::address/128\", then they are\r\ntrying to pull something (evading a poorly coded blocklist, perhaps?)\r\nand we should not allow that to match an IP. Thoughts?\r\n\r\nThanks for the review!\r\n--Jacob\r\n\r\n[1] https://bugzilla.mozilla.org/show_bug.cgi?id=103752\r\n\r\n", "msg_date": "Thu, 16 Dec 2021 18:44:54 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On Thu, 2021-12-16 at 10:50 -0500, Andrew Dunstan wrote:\r\n> Good job, this is certainly going to be useful.\r\n\r\nThanks!\r\n\r\n> I don't think we should fall back on the CN. It would seem quite odd to\r\n> do so for IP addresses but not for DNS names.\r\n\r\nSo there's at least one compatibility concern with disabling the\r\nfallback, in that there could be existing users that are happily using\r\na certificate with an IP address CN, and libpq is just ignoring any\r\niPAddress SANs that the certificate has. 
Once libpq becomes aware of\r\nthose, it will stop accepting the CN and the certificate might stop\r\nworking.\r\n\r\nPersonally I think that's acceptable, but it would probably warrant a\r\nrelease note or some such.\r\n\r\nI will work on implementing behavior that's modeled off of the NSS\r\nmatching logic (see my reply to Horiguchi-san), which will at least\r\nmake it more logically consistent, and we can see what that looks like?\r\n\r\nThanks for the review!\r\n--Jacob\r\n", "msg_date": "Thu, 16 Dec 2021 19:14:58 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "At Thu, 16 Dec 2021 18:44:54 +0000, Jacob Champion <pchampion@vmware.com> wrote in \n> On Thu, 2021-12-16 at 14:54 +0900, Kyotaro Horiguchi wrote:\n> > It seems like saying that we must search for iPAddress and mustn't use\n> > CN nor dNSName if the client connected using IP address. Otherwise, if\n> > the host name is a domain name, we use only dNSName if present, and\n> > use CN otherwise. That behavior seems agreeing to what you wrote as\n> > NSS's behavior.\n> \n> NSS departs slightly from the spec and will additionally try to match\n> an IP address against the CN, but only if there are no iPAddresses in\n> the SAN. It roughly matches the logic for DNS names.\n\nOpenSSL seems different. X509_check_host() tries SAN then CN iff SAN\ndoesn't exist. X509_check_ip() tries SAN and completely ignores\niPAdress and CN.\n\n> Here's the description of the NSS behavior and some of the reasoning\n> behind it, quoted from a developer on Bugzilla [1]:\n> \n> > Elsewhere in RFC 2818, it says \n> > \n> > If a subjectAltName extension of type dNSName is present, that MUST\n> > be used as the identity. Otherwise, the (most specific) Common Name\n> > field in the Subject field of the certificate MUST be used. 
\n> > \n> > Notice that this section is not conditioned upon the URI being a hostname\n> > and not an IP address. So this statement conflicts with the one cited \n> > above. \n> > \n> > I implemented this policy:\n> > \n> > if the URI contains a host name\n> > if the subject alt name is present and has one or more DNS names\n> > use the DNS names in that extension as the server identity\n> > else\n> > use the subject common name as the server identity\n> > else if the URI contains an IP address\n> > if the subject alt name is present and has one or more IP addresses\n> > use the IP addresses in that extension as the server identity\n> > else\n> > compare the URI IP address string with the subject common name.\n(Wow. The article is 20-years old.)\n\n*I* am fine with it.\n\n> It sounds like both you and Andrew might be comfortable with that same\n> behavior? I think it looks like a sane solution, so I'll implement that\n> and we can see what it looks like. (My work on this will be paused over\n> the end-of-year holidays.)\n\n> > I'm not sure about ipv4 comptible addresses. However, I think we can\n> > identify ipv4 compatible address easily.\n> \n> Yeah, it would probably not be a difficult feature to add later.\n\nI agree.\n\n> > + * pg_inet_net_pton() will accept CIDR masks, which we don't want to\n> > + * match, so skip the comparison if the host string contains a slash.\n> > + */\n> > + if (!strchr(host, '/')\n> > + && pg_inet_net_pton(PGSQL_AF_INET6, host, addr, -1) == 128)\n> > \n> > If a cidr is given, pg_inet_net_pton returns a number less than 128 so\n> > we don't need to check '/' explicity? (I'm not sure '/128' is\n> > sensible but doesn't harm..)\n> \n> Personally I think that, if someone wants your libpq to connect to a\n> server with a hostname of \"some:ipv6::address/128\", then they are\n> trying to pull something (evading a poorly coded blocklist, perhaps?)\n> and we should not allow that to match an IP. 
Thoughts?\n\nIf the client could connect to the network-address, it could be said\nthat we can assume that address is the name:p Just kidding.\n\nAs the name suggests, the function reads a network address. And the\nonly user is network_in(). I think we should provide pg_inet_pton()\ninstead of abusing pg_inet_net_pton(). inet_net_pton_*() functions\ncan be modified to reject /cidr part without regression so we are able\nto have pg_inet_pton() with a small amount of change.\n\n- inet_net_pton_ipv4(const char *src, u_char *dst)\n+ inet_net_pton_ipv4_internal(const char *src, u_char *dst, bool netaddr)\n\n+ inet_net_pton_ipv4(const char *src, u_char *dst)\n (calls inet_net_pton_ipv4_internal(src, dst, true))\n+ inet_pton_ipv4(const char *src, u_char *dst)\n (calls inet_net_pton_ipv4_internal(src, dst, false))\n\n> Thanks for the review!\n> --Jacob\n> \n> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=103752\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 17 Dec 2021 15:40:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "Sorry for the silly mistake.\n\nAt Fri, 17 Dec 2021 15:40:10 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > NSS departs slightly from the spec and will additionally try to match\n> > an IP address against the CN, but only if there are no iPAddresses in\n> > the SAN. It roughly matches the logic for DNS names.\n> \n> OpenSSL seems different. X509_check_host() tries SAN then CN iff SAN\n> doesn't exist. X509_check_ip() tries SAN and completely ignores\n> iPAdress and CN.\n\nOpenSSL seems different. X509_check_host() tries SAN then CN iff SAN\ndoesn't exist. 
X509_check_ip() tries iPAddress and completely ignores\nCN.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 17 Dec 2021 16:54:30 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "Hi,\n\nOn 2021-12-16 01:13:57 +0000, Jacob Champion wrote:\n> Attached is a patch for libpq to support IP addresses in the server's\n> Subject Alternative Names, which would allow admins to issue certs for\n> multiple IP addresses, both IPv4 and IPv6, and mix them with\n> alternative DNS hostnames. These addresses are compared bytewise\n> instead of stringwise, so the client can contact the server via\n> alternative spellings of the same IP address.\n\nThis fails to build on windows:\nhttps://cirrus-ci.com/task/6734650927218688?logs=build#L1029\n\n[14:33:28.277] network.obj : error LNK2019: unresolved external symbol pg_inet_net_pton referenced in function network_in [c:\\cirrus\\postgres.vcxproj]\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 2 Jan 2022 13:29:29 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On Fri, 2021-12-17 at 16:54 +0900, Kyotaro Horiguchi wrote:\r\n> Sorry for the silly mistake.\r\n> \r\n> At Fri, 17 Dec 2021 15:40:10 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \r\n> > > NSS departs slightly from the spec and will additionally try to match\r\n> > > an IP address against the CN, but only if there are no iPAddresses in\r\n> > > the SAN. It roughly matches the logic for DNS names.\r\n> > \r\n> > OpenSSL seems different. X509_check_host() tries SAN then CN iff SAN\r\n> > doesn't exist. X509_check_ip() tries SAN and completely ignores\r\n> > iPAdress and CN.\r\n> \r\n> OpenSSL seems different. 
X509_check_host() tries SAN then CN iff SAN\r\n> doesn't exist. X509_check_ip() tries iPAddress and completely ignores\r\n> CN.\r\n\r\nRight.\r\n\r\nOn Fri, 2021-12-17 at 15:40 +0900, Kyotaro Horiguchi wrote:\r\n>\r\n> > + * pg_inet_net_pton() will accept CIDR masks, which we don't want to\r\n> > > + * match, so skip the comparison if the host string contains a slash.\r\n> > > + */\r\n> > > + if (!strchr(host, '/')\r\n> > > + && pg_inet_net_pton(PGSQL_AF_INET6, host, addr, -1) == 128)\r\n> > > \r\n> > > If a cidr is given, pg_inet_net_pton returns a number less than 128 so\r\n> > > we don't need to check '/' explicity? (I'm not sure '/128' is\r\n> > > sensible but doesn't harm..)\r\n> > \r\n> > Personally I think that, if someone wants your libpq to connect to a\r\n> > server with a hostname of \"some:ipv6::address/128\", then they are\r\n> > trying to pull something (evading a poorly coded blocklist, perhaps?)\r\n> > and we should not allow that to match an IP. Thoughts?\r\n> \r\n> If the client could connect to the network-address, it could be said\r\n> that we can assume that address is the name:p Just kidding.\r\n> \r\n> As the name suggests, the function reads a network address. And the\r\n> only user is network_in(). I think we should provide pg_inet_pton()\r\n> instead of abusing pg_inet_net_pton(). inet_net_pton_*() functions\r\n> can be modified to reject /cidr part without regression so we are able\r\n> to have pg_inet_pton() with a small amount of change.\r\n> \r\n> - inet_net_pton_ipv4(const char *src, u_char *dst)\r\n> + inet_net_pton_ipv4_internal(const char *src, u_char *dst, bool netaddr)\r\n> \r\n> + inet_net_pton_ipv4(const char *src, u_char *dst)\r\n> (calls inet_net_pton_ipv4_internal(src, dst, true))\r\n> + inet_pton_ipv4(const char *src, u_char *dst)\r\n> (calls inet_net_pton_ipv4_internal(src, dst, false))\r\n\r\nSounds good, I will make that change. 
Thanks for the feedback!\r\n\r\n--Jacob\r\n", "msg_date": "Mon, 3 Jan 2022 16:19:07 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On Sun, 2022-01-02 at 13:29 -0800, Andres Freund wrote:\r\n> Hi,\r\n> \r\n> On 2021-12-16 01:13:57 +0000, Jacob Champion wrote:\r\n> > Attached is a patch for libpq to support IP addresses in the server's\r\n> > Subject Alternative Names, which would allow admins to issue certs for\r\n> > multiple IP addresses, both IPv4 and IPv6, and mix them with\r\n> > alternative DNS hostnames. These addresses are compared bytewise\r\n> > instead of stringwise, so the client can contact the server via\r\n> > alternative spellings of the same IP address.\r\n> \r\n> This fails to build on windows:\r\n> https://cirrus-ci.com/task/6734650927218688?logs=build#L1029\r\n> \r\n> [14:33:28.277] network.obj : error LNK2019: unresolved external symbol pg_inet_net_pton referenced in function network_in [c:\\cirrus\\postgres.vcxproj]\r\n\r\nThanks for the heads up; I'll fix that while I'm implementing the\r\ninternal API.\r\n\r\n--Jacob\r\n", "msg_date": "Mon, 3 Jan 2022 16:21:08 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On Thu, 2021-12-16 at 18:44 +0000, Jacob Champion wrote:\r\n> It sounds like both you and Andrew might be comfortable with that same\r\n> behavior? 
I think it looks like a sane solution, so I'll implement that\r\n> and we can see what it looks like. (My work on this will be paused over\r\n> the end-of-year holidays.)\r\n\r\nv2 implements the discussed CN/SAN fallback behavior and should fix the\r\nbuild on Windows. Still TODO is the internal pg_inet_pton() refactoring\r\nthat you asked for; I'm still deciding how best to approach it.\r\n\r\nChanges only in since-v1.diff.txt.\r\n\r\nThanks,\r\n--Jacob", "msg_date": "Tue, 4 Jan 2022 22:58:14 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On Mon, 2022-01-03 at 16:19 +0000, Jacob Champion wrote:\r\n> On Fri, 2021-12-17 at 15:40 +0900, Kyotaro Horiguchi wrote:\r\n> > \r\n> > + inet_net_pton_ipv4(const char *src, u_char *dst)\r\n> > (calls inet_net_pton_ipv4_internal(src, dst, true))\r\n> > + inet_pton_ipv4(const char *src, u_char *dst)\r\n> > (calls inet_net_pton_ipv4_internal(src, dst, false))\r\n> \r\n> Sounds good, I will make that change. 
Thanks for the feedback!\r\n\r\nv3 implements a pg_inet_pton(), but for IPv6 instead of IPv4 as\r\npresented above (since we only need inet_pton() for IPv6 in this case).\r\nIt's split into a separate patch (0003) for ease of review.\r\n\r\nThanks!\r\n--Jacob", "msg_date": "Thu, 6 Jan 2022 00:02:27 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "At Thu, 6 Jan 2022 00:02:27 +0000, Jacob Champion <pchampion@vmware.com> wrote in \n> On Mon, 2022-01-03 at 16:19 +0000, Jacob Champion wrote:\n> > On Fri, 2021-12-17 at 15:40 +0900, Kyotaro Horiguchi wrote:\n> > > \n> > > + inet_net_pton_ipv4(const char *src, u_char *dst)\n> > > (calls inet_net_pton_ipv4_internal(src, dst, true))\n> > > + inet_pton_ipv4(const char *src, u_char *dst)\n> > > (calls inet_net_pton_ipv4_internal(src, dst, false))\n> > \n> > Sounds good, I will make that change. Thanks for the feedback!\n> \n> v3 implements a pg_inet_pton(), but for IPv6 instead of IPv4 as\n> presented above (since we only need inet_pton() for IPv6 in this case).\n> It's split into a separate patch (0003) for ease of review.\n\n0001 looks fine as it is in almost the same shape as inet_net_pton\nregarding PGSQL_AF_INET and PGSQL_AF_INET6. I'm not sure about the\ndifference on how to handle AF_INET6 between pg_inet_net_pton and ntop\nbut that's not a matter of this patch.\n\nHowever, 0002,\n\n+/*\n+ * In a frontend build, we can't include inet.h, but we still need to have\n+ * sensible definitions of these two constants. Note that pg_inet_net_ntop()\n+ * assumes that PGSQL_AF_INET is equal to AF_INET.\n+ */\n+#define PGSQL_AF_INET\t(AF_INET + 0)\n+#define PGSQL_AF_INET6\t(AF_INET + 1)\n+\n\nNow we have the same definition thrice in frontend code. 
Couldn't we\ndefine them in, say, libpq-fe.h or inet-fe.h (nonexistent) and then\ninclude it from the three files?\n\n\n+$node->connect_fails(\n+\t\"$common_connstr host=192.0.2.2\",\n+\t\"host not matching an IPv4 address (Subject Alternative Name 1)\",\n\nIt is not the real IP address of the server.\n\nhttps://datatracker.ietf.org/doc/html/rfc6125\n> In some cases, the URI is specified as an IP address rather than a\n> hostname. In this case, the iPAddress subjectAltName must be\n> present in the certificate and must exactly match the IP in the URI.\n\nWhen an IP address is embedded in a URI, it won't be translated to\nanother IP address. Concretely, https://192.0.1.5/hoge cannot reach the\nhost 192.0.1.8. On the other hand, as done in the test, libpq allows that\nwhen \"host=192.0.1.5 hostaddr=192.0.1.8\". I can't understand what we\nare doing in that case. Don't we need to match the SAN IP address\nwith hostaddr instead of host?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 31 Jan 2022 17:29:49 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On Mon, 2022-01-31 at 17:29 +0900, Kyotaro Horiguchi wrote:\r\n> However, 0002,\r\n> \r\n> +/*\r\n> + * In a frontend build, we can't include inet.h, but we still need to have\r\n> + * sensible definitions of these two constants. Note that pg_inet_net_ntop()\r\n> + * assumes that PGSQL_AF_INET is equal to AF_INET.\r\n> + */\r\n> +#define PGSQL_AF_INET (AF_INET + 0)\r\n> +#define PGSQL_AF_INET6 (AF_INET + 1)\r\n> +\r\n> \r\n> Now we have the same definition thrice in frontend code. Coulnd't we\r\n> define them in, say, libpq-fe.h or inet-fe.h (nonexistent) then\r\n> include it from the three files?\r\n\r\nI started down the inet-fe.h route, and then realized I didn't know\r\nwhere that should go. 
Does it need to be included in (or part of)\r\nport.h? And should it be installed as part of the logic in\r\nsrc/include/Makefile?\r\n\r\n> +$node->connect_fails(\r\n> + \"$common_connstr host=192.0.2.2\",\r\n> + \"host not matching an IPv4 address (Subject Alternative Name 1)\",\r\n> \r\n> It is not the real IP address of the server.\r\n> \r\n> https://datatracker.ietf.org/doc/html/rfc6125\r\n> > In some cases, the URI is specified as an IP address rather than a\r\n> > hostname. In this case, the iPAddress subjectAltName must be\r\n> > present in the certificate and must exactly match the IP in the URI.\r\n> \r\n> When IP address is embedded in URI, it won't be translated to another\r\n> IP address. Concretely https://192.0.1.5/hoge cannot reach to the host\r\n> 192.0.1.8. On the other hand, as done in the test, libpq allows that\r\n> when \"host=192.0.1.5 hostaddr=192.0.1.8\". I can't understand what we\r\n> are doing in that case. Don't we need to match the SAN IP address\r\n> with hostaddr instead of host?\r\n\r\nI thought that host, not hostaddr, was the part that corresponded to\r\nthe URI. So in a hypothetical future where postgresqls:// exists, the\r\ntwo URIs\r\n\r\n postgresqls://192.0.2.2:5432/db\r\n postgresqls://192.0.2.2:5432/db?hostaddr=127.0.0.1\r\n\r\nshould both be expecting the same certificate. That seems to match the\r\nlibpq documentation as well.\r\n\r\n(Specifying a host parameter is also allowed... 
that seems like it\r\ncould cause problems for a hypothetical postgresqls:// scheme, but it's\r\nprobably not relevant for this thread.)\r\n\r\n--Jacob\r\n\r\n\r\n", "msg_date": "Wed, 2 Feb 2022 19:46:13 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "At Wed, 2 Feb 2022 19:46:13 +0000, Jacob Champion <pchampion@vmware.com> wrote in \n> On Mon, 2022-01-31 at 17:29 +0900, Kyotaro Horiguchi wrote:\n> > +#define PGSQL_AF_INET (AF_INET + 0)\n> > +#define PGSQL_AF_INET6 (AF_INET + 1)\n> > +\n> > \n> > Now we have the same definition thrice in frontend code. Coulnd't we\n> > define them in, say, libpq-fe.h or inet-fe.h (nonexistent) then\n> > include it from the three files?\n> \n> I started down the inet-fe.h route, and then realized I didn't know\n> where that should go. Does it need to be included in (or part of)\n> port.h? And should it be installed as part of the logic in\n> src/include/Makefile?\n\nI don't think it should be a part of port.h. Though I suggested\nfrontend-only header file by the name, isn't it enough to separate out\nthe definitions from utils/inet.h to common/inet-common.h then include\nthe inet-common.h from inet.h?\n\n> > When IP address is embedded in URI, it won't be translated to another\n> > IP address. Concretely https://192.0.1.5/hoge cannot reach to the host\n> > 192.0.1.8. On the other hand, as done in the test, libpq allows that\n> > when \"host=192.0.1.5 hostaddr=192.0.1.8\". I can't understand what we\n> > are doing in that case. Don't we need to match the SAN IP address\n> > with hostaddr instead of host?\n> \n> I thought that host, not hostaddr, was the part that corresponded to\n> the URI. 
So in a hypothetical future where postgresqls:// exists, the\n> two URIs\n> \n> postgresqls://192.0.2.2:5432/db\n> postgresqls://192.0.2.2:5432/db?hostaddr=127.0.0.1\n> \n> should both be expecting the same certificate. That seems to match the\n> libpq documentation as well.\n\nHmm. Well, considering that the objective for the validation is to\ncheck if the server is actually the client is intending to connect, it\nis fine. Sorry for the noise.\n\n> (Specifying a host parameter is also allowed... that seems like it\n> could cause problems for a hypothetical postgresqls:// scheme, but it's\n> probably not relevant for this thread.)\n\nYeah.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 03 Feb 2022 16:23:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On Thu, 2022-02-03 at 16:23 +0900, Kyotaro Horiguchi wrote:\r\n> At Wed, 2 Feb 2022 19:46:13 +0000, Jacob Champion <pchampion@vmware.com> wrote in \r\n> > On Mon, 2022-01-31 at 17:29 +0900, Kyotaro Horiguchi wrote:\r\n> > > +#define PGSQL_AF_INET (AF_INET + 0)\r\n> > > +#define PGSQL_AF_INET6 (AF_INET + 1)\r\n> > > +\r\n> > > \r\n> > > Now we have the same definition thrice in frontend code. Coulnd't we\r\n> > > define them in, say, libpq-fe.h or inet-fe.h (nonexistent) then\r\n> > > include it from the three files?\r\n> > \r\n> > I started down the inet-fe.h route, and then realized I didn't know\r\n> > where that should go. Does it need to be included in (or part of)\r\n> > port.h? And should it be installed as part of the logic in\r\n> > src/include/Makefile?\r\n> \r\n> I don't think it should be a part of port.h. 
Though I suggested\r\n> frontend-only header file by the name, isn't it enough to separate out\r\n> the definitions from utils/inet.h to common/inet-common.h then include\r\n> the inet-common.h from inet.h?\r\n\r\nThat works a lot better than what I had in my head. Done that way in\r\nv4. Thanks!\r\n\r\n--Jacob", "msg_date": "Fri, 4 Feb 2022 17:06:53 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "At Fri, 4 Feb 2022 17:06:53 +0000, Jacob Champion <pchampion@vmware.com> wrote in \n> That works a lot better than what I had in my head. Done that way in\n> v4. Thanks!\n\nThanks!\n\n0002:\n\n+#define PGSQL_AF_INET\t(AF_INET + 0)\n+#define PGSQL_AF_INET6\t(AF_INET + 1)\n..\n-#define PGSQL_AF_INET\t(AF_INET + 0)\n-#define PGSQL_AF_INET6\t(AF_INET + 1)\n\nI feel this should be a part of 0001. (But the patches will be\nfinally merged so maybe no need to bother moving it).\n\n\n\n> * The use of inet_aton() instead of inet_pton() is deliberate; the\n> * latter cannot handle alternate IPv4 notations (\"numbers-and-dots\").\n\nI think we should be consistent in handling IP addresses. We have\nboth inet_pton and inet_aton to parse IPv4 addresses.\n\nWe use inet_pton in the inet type (network_in).\nWe use inet_aton in server addresses.\n\n# Hmm. I'm surprised to see listen_addresses accepts \"0x7f.1\".\n# I think we should accept the same by network_in but it is another\n# issue.\n\nSo, inet_aton there seems to be the right choice but the comment\ndoesn't describe the reason for that behavior. 
I think we should add\nan explanation about the reason for the behavior, maybe something like\nthis:\n\n> We accept alternative IPv4 address notations that are accepted by\n> inet_aton but not by inet_pton as server address.\n\n\n\n+\t * GEN_IPADD is an OCTET STRING containing an IP address in network byte\n+\t * order.\n\n+\t/* OK to cast from unsigned to plain char, since it's all ASCII. */\n+\treturn pq_verify_peer_name_matches_certificate_ip(conn, (const char *) addrdata, len, store_name);\n\nAren't the two comments contradicting each other? The retruned general\nname looks like an octet array, which is not a subset of ASCII\nstring. So pq_verify_peer_name_matches_certificate_ip should receive\naddrdata as \"const unsigned char *\", without casting.\n\n\n\n+\t\t\tif (name->type == host_type)\n+\t\t\t\tcheck_cn = false;\n\nDon't we want a concise coment for this?\n\n\n\n-\tif (*names_examined == 0)\n+\tif ((rc == 0) && check_cn)\n\nTo me, it seems a bit hard to understand. We can set false to\ncheck_cn in the rc != 0 path in the loop on i, like this:\n\n> \t\t\tif (rc != 0)\n> +\t\t\t{\n> +\t\t\t\t/*\n> +\t\t\t\t * don't fall back to CN when we have a match or have an error\n> +\t\t\t\t */\n> +\t\t\t\tcheck_cn = false;\n> \t\t\t\tbreak;\n> +\t\t\t}\n...\n> -\tif ((rc == 0) && check_cn)\n> +\tif (check_cn)\n\n\n\nThe following existing code (CN fallback)\n\n>\trc = openssl_verify_peer_name_matches_certificate_name(conn,\n> X509_NAME_ENTRY_get_data(X509_NAME_get_entry(subject_name, cn_index)),\n> first_name);\n\nis expecting that first_name has not been set when it is visited.\nHowever, with this patch, first_name can be set when the cert has any\nSAN of unmatching type (DNS/IPADD) and the already-set name leaks. We\nneed to avoid that memory leak since the path can be visited multiple\ntimes from the user-program of libpq. I came up with two directions.\n\n1. Completely ignore type-unmatching entries. first_name is not set by\n such entries. 
Such unmatching entreis doesn't increase\n *names_examined.\n\n2. Avoid overwriting first_name there.\n\nI like 1, but since we don't make distinction between DNS and IPADDR\nin the error message emited by the caller, we would take 2?\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 07 Feb 2022 17:29:57 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On Mon, 2022-02-07 at 17:29 +0900, Kyotaro Horiguchi wrote:\r\n> At Fri, 4 Feb 2022 17:06:53 +0000, Jacob Champion <pchampion@vmware.com> wrote in \r\n> > That works a lot better than what I had in my head. Done that way in\r\n> > v4. Thanks!\r\n> \r\n> Thanks!\r\n> \r\n> 0002:\r\n> \r\n> +#define PGSQL_AF_INET (AF_INET + 0)\r\n> +#define PGSQL_AF_INET6 (AF_INET + 1)\r\n> ..\r\n> -#define PGSQL_AF_INET (AF_INET + 0)\r\n> -#define PGSQL_AF_INET6 (AF_INET + 1)\r\n> \r\n> I feel this should be a part of 0001. (But the patches will be\r\n> finally merged so maybe no need to bother moving it).\r\n\r\nOkay. I can move it easily if you feel like it would help review, but\r\nfor now I've kept it in 0002.\r\n\r\n> > * The use of inet_aton() instead of inet_pton() is deliberate; the\r\n> > * latter cannot handle alternate IPv4 notations (\"numbers-and-dots\").\r\n> \r\n> I think we should be consistent in handling IP addresses. We have\r\n> both inet_pton and inet_aton to parse IPv4 addresses.\r\n> \r\n> We use inet_pton in the inet type (network_in).\r\n> We use inet_aton in server addresses.\r\n> \r\n> # Hmm. I'm surprised to see listen_addresses accepts \"0x7f.1\".\r\n> # I think we should accept the same by network_in but it is another\r\n> # issue.\r\n\r\nYeah, that's an interesting inconsistency.\r\n\r\n> So, inet_aton there seems to be the right choice but the comment\r\n> doesn't describe the reason for that behavior. 
I think we should add\r\n> an explanation about the reason for the behavior, maybe something like\r\n> this:\r\n> \r\n> > We accept alternative IPv4 address notations that are accepted by\r\n> > inet_aton but not by inet_pton as server address.\r\n\r\nI've pulled this wording into the comment in v5, attached.\r\n\r\n> + * GEN_IPADD is an OCTET STRING containing an IP address in network byte\r\n> + * order.\r\n> \r\n> + /* OK to cast from unsigned to plain char, since it's all ASCII. */\r\n> + return pq_verify_peer_name_matches_certificate_ip(conn, (const char *) addrdata, len, store_name);\r\n> \r\n> Aren't the two comments contradicting each other? The retruned general\r\n> name looks like an octet array, which is not a subset of ASCII\r\n> string. So pq_verify_peer_name_matches_certificate_ip should receive\r\n> addrdata as \"const unsigned char *\", without casting.\r\n\r\nBad copy-paste on my part; thanks for the catch. Fixed.\r\n\r\n> + if (name->type == host_type)\r\n> + check_cn = false;\r\n> \r\n> Don't we want a concise coment for this?\r\n\r\nAdded one; see what you think.\r\n\r\n> - if (*names_examined == 0)\r\n> + if ((rc == 0) && check_cn)\r\n> \r\n> To me, it seems a bit hard to understand. We can set false to\r\n> check_cn in the rc != 0 path in the loop on i, like this:\r\n> \r\n> > if (rc != 0)\r\n> > + {\r\n> > + /*\r\n> > + * don't fall back to CN when we have a match or have an error\r\n> > + */\r\n> > + check_cn = false;\r\n> > break;\r\n> > + }\r\n> ...\r\n> > - if ((rc == 0) && check_cn)\r\n> > + if (check_cn)\r\n\r\nIf I understand right, that's not quite equivalent (and the new tests\r\nfail if I implement it that way). 
We have to disable fallback if the\r\nSAN exists, whether it matches or not.\r\n\r\n> The following existing code (CN fallback)\r\n> \r\n> > rc = openssl_verify_peer_name_matches_certificate_name(conn,\r\n> > X509_NAME_ENTRY_get_data(X509_NAME_get_entry(subject_name, cn_index)),\r\n> > first_name);\r\n> \r\n> is expecting that first_name has not been set when it is visited.\r\n> However, with this patch, first_name can be set when the cert has any\r\n> SAN of unmatching type (DNS/IPADD) and the already-set name leaks. We\r\n> need to avoid that memory leak since the path can be visited multiple\r\n> times from the user-program of libpq. I came up with two directions.\r\n> \r\n> 1. Completely ignore type-unmatching entries. first_name is not set by\r\n> such entries. Such unmatching entreis doesn't increase\r\n> *names_examined.\r\n> \r\n> 2. Avoid overwriting first_name there.\r\n> \r\n> I like 1, but since we don't make distinction between DNS and IPADDR\r\n> in the error message emited by the caller, we would take 2?\r\n\r\nGreat catch, thanks! I implemented option 2 to start. Option 1 might\r\nmake things difficult to debug if you're connecting to a server by IP\r\naddress but its certificate only has DNS names.\r\n\r\nThanks!\r\n--Jacob", "msg_date": "Wed, 9 Feb 2022 00:52:48 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "(This needs rebasing)\n\nAt Wed, 9 Feb 2022 00:52:48 +0000, Jacob Champion <pchampion@vmware.com> wrote in \n> On Mon, 2022-02-07 at 17:29 +0900, Kyotaro Horiguchi wrote:\n> > I feel this should be a part of 0001. (But the patches will be\n> > finally merged so maybe no need to bother moving it).\n> \n> Okay. 
I can move it easily if you feel like it would help review, but\n> for now I've kept it in 0002.\n\nThanks.\n\n> > So, inet_aton there seems to be the right choice but the comment\n> > doesn't describe the reason for that behavior. I think we should add\n> > an explanation about the reason for the behavior, maybe something like\n> > this:\n> > \n> > > We accept alternative IPv4 address notations that are accepted by\n> > > inet_aton but not by inet_pton as server address.\n> \n> I've pulled this wording into the comment in v5, attached.\n\n> > + if (name->type == host_type)\n> > + check_cn = false;\n> > \n> > Don't we want a concise coment for this?\n> \n> Added one; see what you think.\n\nThat's fine with me.\n\n> > > if (rc != 0)\n> > > + {\n> > > + /*\n> > > + * don't fall back to CN when we have a match or have an error\n> > > + */\n> > > + check_cn = false;\n> > > break;\n> > > + }\n> > ...\n> > > - if ((rc == 0) && check_cn)\n> > > + if (check_cn)\n> \n> If I understand right, that's not quite equivalent (and the new tests\n> fail if I implement it that way). We have to disable fallback if the\n> SAN exists, whether it matches or not.\n\n# I forgot to mention that, the test fails for me even without the\n# change. I didn't checked what is wrong there, though.\n\nMmm. after the end of the loop, rc is non-zero only when the loop has\nexited by the break and otherwise rc is zero. Why isn't it equivalent\nto setting check_cn to false at the break?\n\nAnyway, apart from that detail, I reconfirmed the spec the patch is\ngoing to implement.\n\n * If connhost contains a DNS name, and the certificate's SANs contain any\n * dNSName entries, then we'll ignore the Subject Common Name entirely;\n * otherwise, we fall back to checking the CN. (This behavior matches the\n * RFC.)\n\nSure.\n\n * If connhost contains an IP address, and the SANs contain iPAddress\n * entries, we again ignore the CN. 
Otherwise, we allow the CN to match,\n * EVEN IF there is a dNSName in the SANs. (RFC 6125 prohibits this: \"A\n * client MUST NOT seek a match for a reference identifier of CN-ID if the\n * presented identifiers include a DNS-ID, SRV-ID, URI-ID, or any\n * application-specific identifier types supported by the client.\")\n\nActually the patch searches for a match of IP address connhost from\ndNSName SANs even if iPAddress SANs exist. I think we've not\nexplicitly defined thebehavior in that case. I supposed that we only\nbe deviant in the case \"IP address connhost and no SANs of any type\nexists\". What do you think about it?\n\n- For the certificate that have only dNSNames or no SANs presented, we\n serach for a match from all dNSNames if any or otherwise try CN\n regardless of the type of connhost.\n\n- Otherwise (the cert has at least one iPAddress SANs) we follow the RFCs.\n\n - For IP-addr connhost, we search only the iPAddress SANs.\n\n - For DNSName connhost, we search only dNSName SANs if any or\n otherwise try CN.\n\nHonestly I didn't consider to that detail. On second thought, with\nthis specification we cannot decide the behavior unless we scanned all\nSANs. Maybe we can find an elegant implement but I don't think people\nhere would welcome even that level of complexity needed only for that\ndubious existing use case.\n\nWhat do you think about this? And I'd like to hear from others.\n\n> Great catch, thanks! I implemented option 2 to start. Option 1 might\n> make things difficult to debug if you're connecting to a server by IP\n> address but its certificate only has DNS names.\n\nLooks fine. 
Thanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 15 Feb 2022 15:16:58 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On Tue, 2022-02-15 at 15:16 +0900, Kyotaro Horiguchi wrote:\r\n> (This needs rebasing)\r\n\r\nDone in v6, attached.\r\n\r\n> # I forgot to mention that, the test fails for me even without the\r\n> # change. I didn't checked what is wrong there, though.\r\n\r\nAh. We should probably figure that out, then -- what failures do you\r\nsee?\r\n\r\n> Mmm. after the end of the loop, rc is non-zero only when the loop has\r\n> exited by the break and otherwise rc is zero. Why isn't it equivalent\r\n> to setting check_cn to false at the break?\r\n\r\ncheck_cn can be false if rc is zero, too; it means that we found a SAN\r\nof the correct type but it didn't match.\r\n\r\n> Anyway, apart from that detail, I reconfirmed the spec the patch is\r\n> going to implement.\r\n> \r\n> * If connhost contains a DNS name, and the certificate's SANs contain any\r\n> * dNSName entries, then we'll ignore the Subject Common Name entirely;\r\n> * otherwise, we fall back to checking the CN. (This behavior matches the\r\n> * RFC.)\r\n> \r\n> Sure.\r\n> \r\n> * If connhost contains an IP address, and the SANs contain iPAddress\r\n> * entries, we again ignore the CN. Otherwise, we allow the CN to match,\r\n> * EVEN IF there is a dNSName in the SANs. (RFC 6125 prohibits this: \"A\r\n> * client MUST NOT seek a match for a reference identifier of CN-ID if the\r\n> * presented identifiers include a DNS-ID, SRV-ID, URI-ID, or any\r\n> * application-specific identifier types supported by the client.\")\r\n> \r\n> Actually the patch searches for a match of IP address connhost from\r\n> dNSName SANs even if iPAddress SANs exist. 
I think we've not\r\n> explicitly defined thebehavior in that case.\r\n\r\nThat's a good point; I didn't change the prior behavior. I feel more\r\ncomfortable leaving that check, since it is technically possible to\r\npush something that looks like an IP address into a dNSName SAN. We\r\nshould probably make an explicit decision on that, as you say.\r\n\r\nBut I don't think that contradicts the code comment, does it? The\r\ncomment is just talking about CN fallback scenarios. If you find a\r\nmatch in a dNSName, there's no reason to fall back to the CN.\r\n\r\n> I supposed that we only\r\n> be deviant in the case \"IP address connhost and no SANs of any type\r\n> exists\". What do you think about it?\r\n\r\nWe fall back in the case of \"IP address connhost and dNSName SANs\r\nexist\", which is prohibited by that part of RFC 6125. I don't think we\r\ndeviate in the case you described; can you explain further?\r\n\r\n> - For the certificate that have only dNSNames or no SANs presented, we\r\n> serach for a match from all dNSNames if any or otherwise try CN\r\n> regardless of the type of connhost.\r\n\r\nCorrect. (I don't find that way of dividing up the cases very\r\nintuitive, though.)\r\n\r\n> - Otherwise (the cert has at least one iPAddress SANs) we follow the RFCs.\r\n> \r\n> - For IP-addr connhost, we search only the iPAddress SANs.\r\n\r\nWe search the dNSNames as well, as you pointed out above. But we don't\r\nfall back to the CN.\r\n\r\n> - For DNSName connhost, we search only dNSName SANs if any or\r\n> otherwise try CN.\r\n\r\nEffectively, yes. (We call the IP address verification functions too,\r\nto get alt_name, but they can't match. If that's too confusing, we'd\r\nneed to pull the alt_name handling up out of the verification layer.)\r\n\r\n> Honestly I didn't consider to that detail. 
On second thought, with\r\n> this specification we cannot decide the behavior unless we scanned all\r\n> SANs.\r\n\r\nRight.\r\n\r\n> Maybe we can find an elegant implement but I don't think people\r\n> here would welcome even that level of complexity needed only for that\r\n> dubious existing use case.\r\n\r\nWhich use case do you mean?\r\n\r\n> What do you think about this? And I'd like to hear from others.\r\n\r\nI think we need to decide whether or not to keep the current \"IP\r\naddress connhost can match a dNSName SAN\" behavior, and if so I need to\r\nadd it to the test cases. (And we need to figure out why the tests are\r\nfailing in your build, of course.)\r\n\r\nThanks!\r\n--Jacob", "msg_date": "Thu, 17 Feb 2022 17:29:15 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "At Thu, 17 Feb 2022 17:29:15 +0000, Jacob Champion <pchampion@vmware.com> wrote in \n> On Tue, 2022-02-15 at 15:16 +0900, Kyotaro Horiguchi wrote:\n> > (This needs rebasing)\n> \n> Done in v6, attached.\n\nThanks!\n\n> > # I forgot to mention that, the test fails for me even without the\n> > # change. I didn't checked what is wrong there, though.\n> \n> Ah. We should probably figure that out, then -- what failures do you\n> see?\n\nI forgot the detail but v6 still fails for me. I think it is that.\n\nt/003_sslinfo.pl ... 1/? # Tests were run but no plan was declared and done_testing() was not seen.\n# Looks like your test exited with 29 just after 6.\nt/003_sslinfo.pl ... 
Dubious, test returned 29 (wstat 7424, 0x1d00)\nAll 6 subtests passed \n...\nResult: FAIL\n\nThe script complains like this:\n\nok 6 - ssl_client_cert_present() for connection with cert\nconnection error: 'psql: error: connection to server at \"127.0.0.1\", port 62656 failed: SSL error: tlsv1 alert unknown ca'\nwhile running 'psql -XAtq -d sslrootcert=ssl/root+server_ca.crt sslmode=require dbname=trustdb hostaddr=127.0.0.1 user=ssltestuser host=localhost -f - -v ON_ERROR_STOP=1' at /home/horiguti/work/worktrees/ipsan/src/test/ssl/../../../src/test/perl/PostgreSQL/Test/Cluster.pm line 1873.\n\nSo, psql looks like disliking the ca certificate. I also will dig\ninto that.\n\n> > Mmm. after the end of the loop, rc is non-zero only when the loop has\n> > exited by the break and otherwise rc is zero. Why isn't it equivalent\n> > to setting check_cn to false at the break?\n> \n> check_cn can be false if rc is zero, too; it means that we found a SAN\n> of the correct type but it didn't match.\n\nDon't we count unmatched certs as \"existed\"? In that case I think we\ndon't go to CN.\n\n> > Anyway, apart from that detail, I reconfirmed the spec the patch is\n> > going to implement.\n> > \n> > * If connhost contains a DNS name, and the certificate's SANs contain any\n> > * dNSName entries, then we'll ignore the Subject Common Name entirely;\n> > * otherwise, we fall back to checking the CN. (This behavior matches the\n> > * RFC.)\n> > \n> > Sure.\n> > \n> > * If connhost contains an IP address, and the SANs contain iPAddress\n> > * entries, we again ignore the CN. Otherwise, we allow the CN to match,\n> > * EVEN IF there is a dNSName in the SANs. 
(RFC 6125 prohibits this: \"A\n> > * client MUST NOT seek a match for a reference identifier of CN-ID if the\n> > * presented identifiers include a DNS-ID, SRV-ID, URI-ID, or any\n> > * application-specific identifier types supported by the client.\")\n> > \n> > Actually the patch searches for a match of IP address connhost from\n> > dNSName SANs even if iPAddress SANs exist. I think we've not\n> > explicitly defined thebehavior in that case.\n> \n> That's a good point; I didn't change the prior behavior. I feel more\n> comfortable leaving that check, since it is technically possible to\n> push something that looks like an IP address into a dNSName SAN. We\n> should probably make an explicit decision on that, as you say.\n> \n> But I don't think that contradicts the code comment, does it? The\n> comment is just talking about CN fallback scenarios. If you find a\n> match in a dNSName, there's no reason to fall back to the CN.\n\nThe comment explains the spec correctly. From a practical view, the\nbehavior above doesn't seem to make things insecure. So I don't have\na strong opinion on the choice of the behaviors.\n\nThe only thing I'm concerned here is the possibility that the decision\ncorners us to some uncomfortable state between the RFC and our spec in\nfuture. On the other hand, changing the behavior can immediately make\nsomeone uncomfortable.\n\nSo, I'd like to leave it to committers:p\n\n> > I supposed that we only\n> > be deviant in the case \"IP address connhost and no SANs of any type\n> > exists\". What do you think about it?\n> \n> We fall back in the case of \"IP address connhost and dNSName SANs\n> exist\", which is prohibited by that part of RFC 6125. I don't think we\n> deviate in the case you described; can you explain further?\n\nIn that case, i.e., connhost is IP address and no SANs exist at all,\nwe go to CN. On the other hand in RFC6125:\n\nrfc6125> In some cases, the URI is specified as an IP address rather\nrfc6125> than a hostname. 
In this case, the iPAddress subjectAltName\nrfc6125> must be present in the certificate and must exactly match the\nrfc6125> IP in the URI.\n\nIt (seems to me) denies that behavior. Regardless of the existence of\nother types of SANs, iPAddress is required if connname is an IP\naddress. (That is, it doesn't seem to me that there's any context\nlike \"if any SANs exists\", but I'm not so sure I read it perfectly.)\n\n> > - For the certificate that have only dNSNames or no SANs presented, we\n> > serach for a match from all dNSNames if any or otherwise try CN\n> > regardless of the type of connhost.\n> \n> Correct. (I don't find that way of dividing up the cases very\n> intuitive, though.)\n\nYeah, it's the same decision to the above. It doesn't matter in the\nsecurity view (if the cert issuer is sane) but we could be cornerd in\nfuture.\n\n> > - Otherwise (the cert has at least one iPAddress SANs) we follow the RFCs.\n> > \n> > - For IP-addr connhost, we search only the iPAddress SANs.\n> \n> We search the dNSNames as well, as you pointed out above. But we don't\n> fall back to the CN.\n\nUr. Yes.\n\n> > - For DNSName connhost, we search only dNSName SANs if any or\n> > otherwise try CN.\n> \n> Effectively, yes. (We call the IP address verification functions too,\n> to get alt_name, but they can't match. If that's too confusing, we'd\n> need to pull the alt_name handling up out of the verification layer.)\n\nYes, I meant that.\n\n> > Honestly I didn't consider to that detail. On second thought, with\n> > this specification we cannot decide the behavior unless we scanned all\n> > SANs.\n> \n> Right.\n> \n> > Maybe we can find an elegant implement but I don't think people\n> > here would welcome even that level of complexity needed only for that\n> > dubious existing use case.\n> \n> Which use case do you mean?\n\n\"dubious\". So I meant that the use case where dNSNames is expected to\nmatch with an IP address.\n\n> > What do you think about this? 
And I'd like to hear from others.\n> \n> I think we need to decide whether or not to keep the current \"IP\n> address connhost can match a dNSName SAN\" behavior, and if so I need to\n> add it to the test cases. (And we need to figure out why the tests are\n> failing in your build, of course.)\n\nThanks. All behaviors and theier reasons is now clear. So....\n\nLet's leave them for committers for now.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 14 Mar 2022 15:30:26 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On Mon, 2022-03-14 at 15:30 +0900, Kyotaro Horiguchi wrote:\r\n> t/003_sslinfo.pl ... 1/? # Tests were run but no plan was declared and done_testing() was not seen.\r\n> # Looks like your test exited with 29 just after 6.\r\n> t/003_sslinfo.pl ... Dubious, test returned 29 (wstat 7424, 0x1d00)\r\n> All 6 subtests passed \r\n> ...\r\n> Result: FAIL\r\n> \r\n> The script complains like this:\r\n> \r\n> ok 6 - ssl_client_cert_present() for connection with cert\r\n> connection error: 'psql: error: connection to server at \"127.0.0.1\", port 62656 failed: SSL error: tlsv1 alert unknown ca'\r\n> while running 'psql -XAtq -d sslrootcert=ssl/root+server_ca.crt sslmode=require dbname=trustdb hostaddr=127.0.0.1 user=ssltestuser host=localhost -f - -v ON_ERROR_STOP=1' at /home/horiguti/work/worktrees/ipsan/src/test/ssl/../../../src/test/perl/PostgreSQL/Test/Cluster.pm line 1873.\r\n> \r\n> So, psql looks like disliking the ca certificate. I also will dig\r\n> into that.\r\n\r\nHmm, the sslinfo tests are failing? I wouldn't have expected that based\r\non the patch changes. Just to confirm -- they pass for you without the\r\npatch?\r\n\r\n> > > Mmm. after the end of the loop, rc is non-zero only when the loop has\r\n> > > exited by the break and otherwise rc is zero. 
Why isn't it equivalent\r\n> > > to setting check_cn to false at the break?\r\n> > \r\n> > check_cn can be false if rc is zero, too; it means that we found a SAN\r\n> > of the correct type but it didn't match.\r\n> \r\n> Don't we count unmatched certs as \"existed\"? In that case I think we\r\n> don't go to CN.\r\n\r\nUnmatched names, you mean? I'm not sure I understand.\r\n\r\nIf it helps, the two tests that will fail if check_cn is unset only at\r\nthe break are\r\n\r\n- certificate with both a CN and SANs ignores CN\r\n- certificate with both an IP CN and IP SANs ignores CN\r\n\r\nbecause none of the SANs would match in that case.\r\n\r\n> > > Actually the patch searches for a match of IP address connhost from\r\n> > > dNSName SANs even if iPAddress SANs exist. I think we've not\r\n> > > explicitly defined thebehavior in that case.\r\n> > \r\n> > That's a good point; I didn't change the prior behavior. I feel more\r\n> > comfortable leaving that check, since it is technically possible to\r\n> > push something that looks like an IP address into a dNSName SAN. We\r\n> > should probably make an explicit decision on that, as you say.\r\n> > \r\n> > But I don't think that contradicts the code comment, does it? The\r\n> > comment is just talking about CN fallback scenarios. If you find a\r\n> > match in a dNSName, there's no reason to fall back to the CN.\r\n> \r\n> The comment explains the spec correctly. From a practical view, the\r\n> behavior above doesn't seem to make things insecure. So I don't have\r\n> a strong opinion on the choice of the behaviors.\r\n> \r\n> The only thing I'm concerned here is the possibility that the decision\r\n> corners us to some uncomfortable state between the RFC and our spec in\r\n> future. On the other hand, changing the behavior can immediately make\r\n> someone uncomfortable.\r\n> \r\n> So, I'd like to leave it to committers:p\r\n\r\nSounds good. 
I'll work on adding tests for the current behavior, and if\r\nthe committers don't like it, we can change it.\r\n\r\n> > > I supposed that we only\r\n> > > be deviant in the case \"IP address connhost and no SANs of any type\r\n> > > exists\". What do you think about it?\r\n> > \r\n> > We fall back in the case of \"IP address connhost and dNSName SANs\r\n> > exist\", which is prohibited by that part of RFC 6125. I don't think we\r\n> > deviate in the case you described; can you explain further?\r\n> \r\n> In that case, i.e., connhost is IP address and no SANs exist at all,\r\n> we go to CN. On the other hand in RFC6125:\r\n> \r\n> rfc6125> In some cases, the URI is specified as an IP address rather\r\n> rfc6125> than a hostname. In this case, the iPAddress subjectAltName\r\n> rfc6125> must be present in the certificate and must exactly match the\r\n> rfc6125> IP in the URI.\r\n> \r\n> It (seems to me) denies that behavior. Regardless of the existence of\r\n> other types of SANs, iPAddress is required if connname is an IP\r\n> address. (That is, it doesn't seem to me that there's any context\r\n> like \"if any SANs exists\", but I'm not so sure I read it perfectly.)\r\n\r\nI see what you mean now. Yes, we deviate there as well (and have done\r\nso for a while now). I think breaking compatibility there would\r\nprobably not go over well.\r\n\r\n> Thanks. All behaviors and theier reasons is now clear. So....\r\n> \r\n> Let's leave them for committers for now.\r\n\r\nThank you for the review!\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 15 Mar 2022 21:41:49 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On Tue, 2022-03-15 at 21:41 +0000, Jacob Champion wrote:\r\n> Sounds good. 
I'll work on adding tests for the current behavior, and if\r\n> the committers don't like it, we can change it.\r\n\r\nDone in v7, attached.\r\n\r\n--Jacob", "msg_date": "Tue, 15 Mar 2022 23:24:08 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "At Tue, 15 Mar 2022 21:41:49 +0000, Jacob Champion <pchampion@vmware.com> wrote in \n> On Mon, 2022-03-14 at 15:30 +0900, Kyotaro Horiguchi wrote:\n> > t/003_sslinfo.pl ... 1/? # Tests were run but no plan was declared and done_testing() was not seen.\n> > # Looks like your test exited with 29 just after 6.\n> > t/003_sslinfo.pl ... Dubious, test returned 29 (wstat 7424, 0x1d00)\n> > All 6 subtests passed \n> > ...\n> > Result: FAIL\n> > \n> > The script complains like this:\n> > \n> > ok 6 - ssl_client_cert_present() for connection with cert\n> > connection error: 'psql: error: connection to server at \"127.0.0.1\", port 62656 failed: SSL error: tlsv1 alert unknown ca'\n> > while running 'psql -XAtq -d sslrootcert=ssl/root+server_ca.crt sslmode=require dbname=trustdb hostaddr=127.0.0.1 user=ssltestuser host=localhost -f - -v ON_ERROR_STOP=1' at /home/horiguti/work/worktrees/ipsan/src/test/ssl/../../../src/test/perl/PostgreSQL/Test/Cluster.pm line 1873.\n> > \n> > So, psql looks like disliking the ca certificate. I also will dig\n> > into that.\n> \n> Hmm, the sslinfo tests are failing? I wouldn't have expected that based\n> on the patch changes. Just to confirm -- they pass for you without the\n> patch?\n\nMmm.... I'm not sure how come I didn't noticed that, master also fails\nfor me fo the same reason. In the past that may fail when valid\nclinent-certs exists in the users ~/.postgresql but I believe that has\nbeen fixed.\n\n\n> > > > Mmm. after the end of the loop, rc is non-zero only when the loop has\n> > > > exited by the break and otherwise rc is zero. 
Why isn't it equivalent\n> > > > to setting check_cn to false at the break?\n> > > \n> > > check_cn can be false if rc is zero, too; it means that we found a SAN\n> > > of the correct type but it didn't match.\n\nI'm not discussing on the meaning. Purely on the logical equivalancy\nof the two ways to decide whether to visit CN.\n\nConcretely, the equivalancy between this:\n\n=====\n check_cn = true;\n rc = 0;\n for (i < san_len)\n {\n if (type matches) check_cn = false;\n if (some error happens) rc = nonzero;\n \n if (rc != 0)\n break;\n }\n!if ((rc == 0) && check_cn) {}\n=====\n\nand this.\n\n=====\n check_cn = true;\n rc = 0;\n for (i < san_len)\n {\n if (type matches) check_cn = false;\n if (some error happens) rc = nonzero;\n\n if (rc != 0)\n {\n! check_cn = false;\n break;\n }\n }\n!if (check_cn) {}\n=====\n\nThe two are equivalant to me. And if it is, the latter form smees\nsimpler and clearner to me.\n\n> > > check_cn can be false if rc is zero, too; it means that we found a SAN\n> > > of the correct type but it didn't match.\n> \n> > Don't we count unmatched certs as \"existed\"? In that case I think we\n> > don't go to CN.\n> \n> Unmatched names, you mean? I'm not sure I understand.\n\nSorry, I was confused here. Please ignore that. I shoudl have said\nsomething like the following instead.\n\n> check_cn can be false if rc is zero, too; it means that we found a SAN\n> of the correct type but it didn't match.\n\nYes, in that case we don't visit CN because check_cn is false even if\nwe don't exit by (rc != 0) and that behavior is not changed by my\nproposal.\n\n> > > > I supposed that we only\n> > > > be deviant in the case \"IP address connhost and no SANs of any type\n> > > > exists\". What do you think about it?\n> > > \n> > > We fall back in the case of \"IP address connhost and dNSName SANs\n> > > exist\", which is prohibited by that part of RFC 6125. 
I don't think we\n> > > deviate in the case you described; can you explain further?\n> > \n> > In that case, i.e., connhost is IP address and no SANs exist at all,\n> > we go to CN. On the other hand in RFC6125:\n> > \n> > rfc6125> In some cases, the URI is specified as an IP address rather\n> > rfc6125> than a hostname. In this case, the iPAddress subjectAltName\n> > rfc6125> must be present in the certificate and must exactly match the\n> > rfc6125> IP in the URI.\n> > \n> > It (seems to me) denies that behavior. Regardless of the existence of\n> > other types of SANs, iPAddress is required if connname is an IP\n> > address. (That is, it doesn't seem to me that there's any context\n> > like \"if any SANs exists\", but I'm not so sure I read it perfectly.)\n> \n> I see what you mean now. Yes, we deviate there as well (and have done\n> so for a while now). I think breaking compatibility there would\n> probably not go over well.\n\nAgreed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 16 Mar 2022 15:56:02 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "At Wed, 16 Mar 2022 15:56:02 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Mmm.... I'm not sure how come I didn't noticed that, master also fails\n> for me fo the same reason. In the past that may fail when valid\n> clinent-certs exists in the users ~/.postgresql but I believe that has\n> been fixed.\n\nAnd finally my fear found to be true.. 
The test doesn't fail after\nremoving files under ~/.postgresql, which should not happen.\n\nI'll fix it apart from this.\nSorry for the confusion.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 16 Mar 2022 16:04:16 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On Wed, 2022-03-16 at 15:56 +0900, Kyotaro Horiguchi wrote:\r\n> At Tue, 15 Mar 2022 21:41:49 +0000, Jacob Champion <pchampion@vmware.com> wrote in \r\n> > Hmm, the sslinfo tests are failing? I wouldn't have expected that based\r\n> > on the patch changes. Just to confirm -- they pass for you without the\r\n> > patch?\r\n> \r\n> Mmm.... I'm not sure how come I didn't noticed that, master also fails\r\n> for me fo the same reason. In the past that may fail when valid\r\n> clinent-certs exists in the users ~/.postgresql but I believe that has\r\n> been fixed.\r\n\r\nGood to know; I was worried that I'd messed up something well outside\r\nthe code I'd touched.\r\n\r\n> > > > > Mmm. after the end of the loop, rc is non-zero only when the loop has\r\n> > > > > exited by the break and otherwise rc is zero. Why isn't it equivalent\r\n> > > > > to setting check_cn to false at the break?\r\n> > > > \r\n> > > > check_cn can be false if rc is zero, too; it means that we found a SAN\r\n> > > > of the correct type but it didn't match.\r\n> \r\n> I'm not discussing on the meaning. Purely on the logical equivalancy\r\n> of the two ways to decide whether to visit CN.\r\n> \r\n> Concretely, the equivalancy between this: [snip]\r\n\r\nThank you for the explanation -- the misunderstanding was all on my\r\nend. I thought you were asking me to move the check_cn assignment\r\ninstead of copying it to the end. 
I agree that your suggestion is much\r\nclearer, and I'll make that change tomorrow.\r\n\r\nThanks!\r\n--Jacob\r\n", "msg_date": "Wed, 16 Mar 2022 23:49:48 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On Wed, 2022-03-16 at 23:49 +0000, Jacob Champion wrote:\r\n> Thank you for the explanation -- the misunderstanding was all on my\r\n> end. I thought you were asking me to move the check_cn assignment\r\n> instead of copying it to the end. I agree that your suggestion is much\r\n> clearer, and I'll make that change tomorrow.\r\n\r\nDone in v8. Thanks again for your suggestions (and for your\r\nperseverance when I didn't get it)!\r\n\r\n--Jacob", "msg_date": "Thu, 17 Mar 2022 21:55:07 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "At Thu, 17 Mar 2022 21:55:07 +0000, Jacob Champion <pchampion@vmware.com> wrote in \n> On Wed, 2022-03-16 at 23:49 +0000, Jacob Champion wrote:\n> > Thank you for the explanation -- the misunderstanding was all on my\n> > end. I thought you were asking me to move the check_cn assignment\n> > instead of copying it to the end. I agree that your suggestion is much\n> > clearer, and I'll make that change tomorrow.\n> \n> Done in v8. Thanks again for your suggestions (and for your\n> perseverance when I didn't get it)!\n\nThanks! .. and some nitpicks..(Sorry)\n\nfe-secure-common.c doesn't need netinet/in.h.\n\n\n+++ b/src/include/utils/inet.h\n.. \n+#include \"common/inet-common.h\"\n\nI'm not sure about the project policy on #include practice, but I\nthink it is the common practice not to include headers that are not\nrequired by the file itself. In this case, fe-secure-common.h itself\ndoesn't need the include. 
Instead, fe-secure-openssl.c and\nfe-secure-common.c need the include.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 18 Mar 2022 16:38:57 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "At Fri, 18 Mar 2022 16:38:57 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 17 Mar 2022 21:55:07 +0000, Jacob Champion <pchampion@vmware.com> wrote in \n> Thanks! .. and some nitpicks..(Sorry)\n> \n> fe-secure-common.c doesn't need netinet/in.h.\n> \n> \n> +++ b/src/include/utils/inet.h\n> .. \n> +#include \"common/inet-common.h\"\n> \n> I'm not sure about the project policy on #include practice, but I\n> think it is the common practice not to include headers that are not\n> required by the file itself. In this case, fe-secure-common.h itself\n> doesn't need the include. Instead, fe-secure-openssl.c and\n> fe-secure-common.c need the include.\n\nI noticed that this doesn't contain doc changes.\n\nhttps://www.postgresql.org/docs/current/libpq-ssl.html\n\n> In verify-full mode, the host name is matched against the\n> certificate's Subject Alternative Name attribute(s), or against the\n> Common Name attribute if no Subject Alternative Name of type dNSName\n> is present. If the certificate's name attribute starts with an\n> asterisk (*), the asterisk will be treated as a wildcard, which will\n> match all characters except a dot (.). This means the certificate will\n> not match subdomains. 
If the connection is made using an IP address\n> instead of a host name, the IP address will be matched (without doing\n> any DNS lookups).\n\nThis refers to dNSName, so we should revise this so that it describes\nthe new behavior.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 22 Mar 2022 13:32:02 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On Tue, 2022-03-22 at 13:32 +0900, Kyotaro Horiguchi wrote:\r\n> At Fri, 18 Mar 2022 16:38:57 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\r\n> > \r\n> > fe-secure-common.c doesn't need netinet/in.h.\r\n> > \r\n> > \r\n> > +++ b/src/include/utils/inet.h\r\n> > ..\r\n> > +#include \"common/inet-common.h\"\r\n> > \r\n> > I'm not sure about the project policy on #include practice, but I\r\n> > think it is the common practice not to include headers that are not\r\n> > required by the file itself. In this case, fe-secure-common.h itself\r\n> > doesn't need the include. Instead, fe-secure-openssl.c and\r\n> > fe-secure-common.c need the include.\r\n\r\nThanks, looks like I had some old header dependencies left over from\r\nseveral versions ago. 
Fixed in v9.\r\n\r\n> I noticed that this doesn't contain doc changes.\r\n> \r\n> https://www.postgresql.org/docs/current/libpq-ssl.html\r\n> \r\n> > In verify-full mode, the host name is matched against the\r\n> > certificate's Subject Alternative Name attribute(s), or against the\r\n> > Common Name attribute if no Subject Alternative Name of type dNSName\r\n> > is present. If the certificate's name attribute starts with an\r\n> > asterisk (*), the asterisk will be treated as a wildcard, which will\r\n> > match all characters except a dot (.). This means the certificate will\r\n> > not match subdomains. If the connection is made using an IP address\r\n> > instead of a host name, the IP address will be matched (without doing\r\n> > any DNS lookups).\r\n> \r\n> This refers to dNSName, so we should revise this so that it describes\r\n> the new behavior.\r\n\r\nv9 contains the bare minimum but I don't think it's quite enough. 
How\n> much of the behavior (and edge cases) do you think we should detail\n> here? All of it?\n\nI tried to write out the doc part. What do you think about it?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\ndiff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml\nindex 3998b1781b..13e3e63768 100644\n--- a/doc/src/sgml/libpq.sgml\n+++ b/doc/src/sgml/libpq.sgml\n@@ -8342,16 +8342,31 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*)\n \n <para>\n In <literal>verify-full</literal> mode, the host name is matched against the\n- certificate's Subject Alternative Name attribute(s), or against the\n- Common Name attribute if no Subject Alternative Name of type <literal>dNSName</literal> is\n+ certificate's Subject Alternative Name attribute(s) (SAN), or against the\n+ Common Name attribute if no SAN of type <literal>dNSName</literal> is\n present. If the certificate's name attribute starts with an asterisk\n (<literal>*</literal>), the asterisk will be treated as\n a wildcard, which will match all characters <emphasis>except</emphasis> a dot\n (<literal>.</literal>). This means the certificate will not match subdomains.\n If the connection is made using an IP address instead of a host name, the\n- IP address will be matched (without doing any DNS lookups).\n+ IP address will be matched (without doing any DNS lookups) against SANs of\n+ type <literal>iPAddress</literal> or <literal>dNSName</literal>. 
If no\n+ <literal>ipAddress</literal> SAN is present and no\n+ matching <literal>dNSName</literal> SAN is present, the host IP address is\n+ matched against the Common Name attribute.\n </para>\n \n+ <note>\n+ <para>\n+ For backward compatibility with earlier versions of PostgreSQL, the host\n+ IP address is verified in a manner different\n+ from <ulink url=\"https://tools.ietf.org/html/rfc6125\">RFC 6125</ulink>.\n+ The host IP address is always matched against <literal>dNSName</literal>\n+ SANs as well as <literal>iPAdress</literal> SANs, and can be matched\n+ against the Common Name attribute for a certain condition.\n+ </para>\n+ </note>\n+\n <para>\n To allow server certificate verification, one or more root certificates\n must be placed in the file <filename>~/.postgresql/root.crt</filename>", "msg_date": "Wed, 23 Mar 2022 14:20:07 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On Wed, 2022-03-23 at 14:20 +0900, Kyotaro Horiguchi wrote:\r\n> I tried to write out the doc part. What do you think about it?\r\n\r\nI like it, thanks! I've applied that in v10, with a tweak to two\r\niPAddress spellings and a short expansion of the condition in the Note,\r\nand I've added you as a co-author to 0002.\r\n\r\n--Jacob", "msg_date": "Wed, 23 Mar 2022 23:52:06 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "At Wed, 23 Mar 2022 23:52:06 +0000, Jacob Champion <pchampion@vmware.com> wrote in \n> On Wed, 2022-03-23 at 14:20 +0900, Kyotaro Horiguchi wrote:\n> > I tried to write out the doc part. What do you think about it?\n> \n> I like it, thanks! 
I've applied that in v10, with a tweak to two\n> iPAddress spellings and a short expansion of the condition in the Note,\n> and I've added you as a co-author to 0002.\n\nI'm fine with it. Thanks. I marked it as Ready-for-Committer.\n\nNote for the patch set:\n\n0001 is a preliminary patch to move inet_pton out of src/backend tree. \n\n0002 is the main patch of this patchset\n\n0003 is optional; it introduces pg_inet_pton(), which only works for IPv6\n addresses. 0002 gets the same effect by the following use of\n pg_inet_net_pton().\n\n > if (!strchr(host, '/')\n > && pg_inet_net_pton(PGSQL_AF_INET6, host, addr, -1) == 128)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 24 Mar 2022 17:10:54 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On Thu, 2022-03-24 at 17:10 +0900, Kyotaro Horiguchi wrote:\r\n> I'm fine with it. Thanks. I marked it as Ready-for-Committer.\r\n\r\nThank you for the reviews and feedback!\r\n\r\n--Jacob\r\n", "msg_date": "Thu, 24 Mar 2022 15:36:51 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "Jacob Champion <pchampion@vmware.com> writes:\n> [ v10-0001-Move-inet_net_pton-to-src-port.patch etc ]\n\nThere is something broken about the ssl tests as modified by\nthis patch. The cfbot doesn't provide a lot of evidence about\nwhy it's failing, but I applied the patchset locally and what\nI see is\n\n...\nok 47 - mismatch between host name and server certificate sslmode=verify-full: m\natches\nOdd number of elements in hash assignment at /home/postgres/pgsql/src/test/ssl/t\n/SSL/Server.pm line 288.\nUse of uninitialized value in concatenation (.) 
or string at /home/postgres/pgsq\nl/src/test/ssl/t/SSL/Backend/OpenSSL.pm line 178.\nUse of uninitialized value in concatenation (.) or string at /home/postgres/pgsq\nl/src/test/ssl/t/SSL/Backend/OpenSSL.pm line 178.\n### Restarting node \"primary\"\n# Running: pg_ctl -w -D /home/postgres/pgsql/src/test/ssl/tmp_check/t_001_ssltes\nts_primary_data/pgdata -l /home/postgres/pgsql/src/test/ssl/tmp_check/log/001_ss\nltests_primary.log restart\nwaiting for server to shut down.... done\nserver stopped\nwaiting for server to start.... stopped waiting\npg_ctl: could not start server\n\nThe tail end of the server log is\n\n2022-03-27 17:13:11.482 EDT [551720] FATAL: could not load server certificate file \".crt\": No such file or directory\n\nso it seems pretty clear that something's fouling up computation of\na certificate file name. This may be caused by 9ca234bae or\n4a7e964fc.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 27 Mar 2022 17:19:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "> On 27 Mar 2022, at 23:19, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> This may be caused by 9ca234bae or 4a7e964fc.\n\n\nI'd say 4a7e964fc is the culprit here. From a quick skim, the\nswitch_server_cert() calls need to be changed along the lines of:\n\n from: switch_server_cert($node, 'server-ip-in-dnsname');\n to: switch_server_cert($node, certfile => 'server-ip-in-dnsname');\n\nThere might be more changes required, that was the one that stood out. 
Unless\nsomeone beats me to it I'll take a look at fixing up the test in this patch\ntomorrow.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 28 Mar 2022 00:44:07 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "Hi,\n\nOn 2022-03-27 17:19:27 -0400, Tom Lane wrote:\n> The cfbot doesn't provide a lot of evidence about\n> why it's failing, but I applied the patchset locally and what\n> I see is\n\nFWIW - and I agree that's not nice user interface wise - just below the cpu /\nmemory graphs there's a \"directory browser\", allowing you to navigate to the\nsaved log files. Navigating to log/src/test/ssl/tmp_check/log allows you to\ndownload\nhttps://api.cirrus-ci.com/v1/artifact/task/5261015175659520/log/src/test/ssl/tmp_check/log/regress_log_001_ssltests\nhttps://api.cirrus-ci.com/v1/artifact/task/5261015175659520/log/src/test/ssl/tmp_check/log/001_ssltests_primary.log\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 27 Mar 2022 17:54:51 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "> On 28 Mar 2022, at 00:44, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> I'll take a look at fixing up the test in this patch tomorrow.\n\nFixing up the switch_server_cert() calls and using default_ssl_connstr makes\nthe test pass for me. 
The required fixes are in the supplied 0004 diff, I kept\nthem separate to allow the original author to incorporate them without having\nto dig them out to see what changed (named to match the git format-patch output\nsince I think the CFBot just applies the patches in alphabetical order).\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Mon, 28 Mar 2022 11:17:25 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On Mon, 28 Mar 2022 at 05:17, Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> named to match the git format-patch output\n> since I think the CFBot just applies the patches in alphabetical order).\n\nThe first patch doesn't seem to actually apply though so it doesn't\nget to the subsequent patches.\n\nhttp://cfbot.cputube.org/patch_37_3458.log\n\n=== Applying patches on top of PostgreSQL commit ID\ne26114c817b610424010cfbe91a743f591246ff1 ===\n=== applying patch ./v10-0001-Move-inet_net_pton-to-src-port.patch\npatching file src/backend/utils/adt/Makefile\nHunk #1 FAILED at 44.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/backend/utils/adt/Makefile.rej\npatching file src/include/port.h\npatching file src/include/utils/builtins.h\npatching file src/port/Makefile\npatching file src/port/inet_net_pton.c (renamed from\nsrc/backend/utils/adt/inet_net_pton.c)\npatching file src/tools/msvc/Mkvcbuild.pm\n\n\n-- \ngreg\n\n\n", "msg_date": "Mon, 28 Mar 2022 15:43:43 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On Mon, 2022-03-28 at 11:17 +0200, Daniel Gustafsson wrote:\r\n> Fixing up the switch_server_cert() calls and using default_ssl_connstr makes\r\n> the test pass for me. 
The required fixes are in the supplied 0004 diff, I kept\r\n> them separate to allow the original author to incorporate them without having\r\n> to dig them out to see what changed (named to match the git format-patch output\r\n> since I think the CFBot just applies the patches in alphabetical order).\r\n\r\nThanks! Those changes look good to me; I've folded them into v11. This\r\nis rebased on a newer HEAD so it should fix the apply failures that\r\nGreg pointed out.\r\n\r\n--Jacob", "msg_date": "Mon, 28 Mar 2022 20:21:17 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On 28.03.22 22:21, Jacob Champion wrote:\n> On Mon, 2022-03-28 at 11:17 +0200, Daniel Gustafsson wrote:\n>> Fixing up the switch_server_cert() calls and using default_ssl_connstr makes\n>> the test pass for me. The required fixes are in the supplied 0004 diff, I kept\n>> them separate to allow the original author to incorporate them without having\n>> to dig them out to see what changed (named to match the git format-patch output\n>> since I think the CFBot just applies the patches in alphabetical order).\n> \n> Thanks! Those changes look good to me; I've folded them into v11. This\n> is rebased on a newer HEAD so it should fix the apply failures that\n> Greg pointed out.\n\nI'm not happy about how inet_net_pton.o is repurposed here. That code \nis clearly meant to support backend data types with specifically. Code like\n\n+ /*\n+ * pg_inet_net_pton() will accept CIDR masks, which we don't want to\n+ * match, so skip the comparison if the host string contains a slash.\n+ */\n\nindicates that we are fighting against the API. Also, if someone ever \nwants to change how those backend data types work, we then have to check \na bunch of other code as well.\n\nI think we should be using inet_ntop()/inet_pton() directly here. 
We \ncan throw substitute implementations into libpgport if necessary, that's \nnot so difficult.\n\n\n", "msg_date": "Wed, 30 Mar 2022 13:37:31 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On Wed, 2022-03-30 at 13:37 +0200, Peter Eisentraut wrote:\r\n> On 28.03.22 22:21, Jacob Champion wrote:\r\n> > On Mon, 2022-03-28 at 11:17 +0200, Daniel Gustafsson wrote:\r\n> > > Fixing up the switch_server_cert() calls and using default_ssl_connstr makes\r\n> > > the test pass for me. The required fixes are in the supplied 0004 diff, I kept\r\n> > > them separate to allow the original author to incorporate them without having\r\n> > > to dig them out to see what changed (named to match the git format-patch output\r\n> > > since I think the CFBot just applies the patches in alphabetical order).\r\n> > \r\n> > Thanks! Those changes look good to me; I've folded them into v11. This\r\n> > is rebased on a newer HEAD so it should fix the apply failures that\r\n> > Greg pointed out.\r\n> \r\n> I'm not happy about how inet_net_pton.o is repurposed here. That code\r\n> is clearly meant to support backend data types with specifically. Code like\r\n> \r\n> + /*\r\n> + * pg_inet_net_pton() will accept CIDR masks, which we don't want to\r\n> + * match, so skip the comparison if the host string contains a slash.\r\n> + */\r\n> \r\n> indicates that we are fighting against the API.\r\n\r\nHoriguchi-san had the same concern upthread, I think. I replaced that\r\ncode in the next patch, but it was enough net-new stuff that I kept the\r\npatches separate instead of merging them. 
I can change that if it's not\r\nhelpful for review.\r\n\r\n> Also, if someone ever\r\n> wants to change how those backend data types work, we then have to check\r\n> a bunch of other code as well.\r\n> \r\n> I think we should be using inet_ntop()/inet_pton() directly here. We\r\n> can throw substitute implementations into libpgport if necessary, that's\r\n> not so difficult.\r\n\r\nIs this request satisfied by the implementation of pg_inet_pton() in\r\npatch 0003? If not, what needs to change?\r\n\r\nThanks,\r\n--Jacob\r\n", "msg_date": "Wed, 30 Mar 2022 16:17:18 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On 30.03.22 18:17, Jacob Champion wrote:\n>> Also, if someone ever\n>> wants to change how those backend data types work, we then have to check\n>> a bunch of other code as well.\n>>\n>> I think we should be using inet_ntop()/inet_pton() directly here. We\n>> can throw substitute implementations into libpgport if necessary, that's\n>> not so difficult.\n> Is this request satisfied by the implementation of pg_inet_pton() in\n> patch 0003? If not, what needs to change?\n\nWhy add a (failry complicated) pg_inet_pton() when a perfectly \nreasonable inet_pton() exists?\n\nI would get rid of all that refactoring and just have your code call \ninet_pton()/inet_ntop() directly.\n\nIf you're worried about portability, and you don't want to go through \nthe effort of proving libpgport substitutes, just have your code raise \nan error in the \"#else\" code paths. 
We can fill that in later if there \nis demand.\n\n\n", "msg_date": "Thu, 31 Mar 2022 16:32:27 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On Thu, 2022-03-31 at 16:32 +0200, Peter Eisentraut wrote:\r\n> Why add a (failry complicated) pg_inet_pton() when a perfectly\r\n> reasonable inet_pton() exists?\r\n\r\nI think it was mostly just that inet_aton() and pg_inet_net_ntop() both\r\nhad ports, and I figured I might as well port the other one since we\r\nalready had the implementation. (I don't have a good intuition yet for\r\nthe community's preference for port vs dependency.)\r\n\r\n> I would get rid of all that refactoring and just have your code call\r\n> inet_pton()/inet_ntop() directly.\r\n> \r\n> If you're worried about portability, and you don't want to go through\r\n> the effort of proving libpgport substitutes, just have your code raise\r\n> an error in the \"#else\" code paths. We can fill that in later if there\r\n> is demand.\r\n\r\nSwitched to inet_pton() in v12, with no #if/else for now. I think this\r\nshould work with Winsock as-is; let's see if the bot agrees...\r\n\r\nThanks,\r\n--Jacob", "msg_date": "Thu, 31 Mar 2022 18:15:25 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On 31.03.22 20:15, Jacob Champion wrote:\n> On Thu, 2022-03-31 at 16:32 +0200, Peter Eisentraut wrote:\n>> Why add a (failry complicated) pg_inet_pton() when a perfectly\n>> reasonable inet_pton() exists?\n> \n> I think it was mostly just that inet_aton() and pg_inet_net_ntop() both\n> had ports, and I figured I might as well port the other one since we\n> already had the implementation. 
(I don't have a good intuition yet for\n> the community's preference for port vs dependency.)\n> \n>> I would get rid of all that refactoring and just have your code call\n>> inet_pton()/inet_ntop() directly.\n>>\n>> If you're worried about portability, and you don't want to go through\n>> the effort of proving libpgport substitutes, just have your code raise\n>> an error in the \"#else\" code paths. We can fill that in later if there\n>> is demand.\n> \n> Switched to inet_pton() in v12, with no #if/else for now. I think this\n> should work with Winsock as-is; let's see if the bot agrees...\n\nI have committed this.\n\nI have removed the inet header refactoring that you had. That wasn't \nnecessary, since pg_inet_net_ntop() can use the normal AF_INET* \nconstants. The PGSQL_AF_INET* constants are only for the internal \nstorage of the inet/cidr types.\n\nI have added a configure test for inet_pton(). We can check in the \nbuild farm if it turns out to be necessary.\n\n\n", "msg_date": "Fri, 1 Apr 2022 16:07:34 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" }, { "msg_contents": "On Fri, 2022-04-01 at 16:07 +0200, Peter Eisentraut wrote:\r\n> I have committed this.\r\n> \r\n> I have removed the inet header refactoring that you had. That wasn't\r\n> necessary, since pg_inet_net_ntop() can use the normal AF_INET*\r\n> constants. The PGSQL_AF_INET* constants are only for the internal\r\n> storage of the inet/cidr types.\r\n> \r\n> I have added a configure test for inet_pton(). We can check in the\r\n> build farm if it turns out to be necessary.\r\n\r\nThanks!\r\n\r\n--Jacob\r\n", "msg_date": "Fri, 1 Apr 2022 15:24:30 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Accept IP addresses in server certificate SANs" } ]
[ { "msg_contents": "Hi,\n\nApple's ranlib doesn't like empty translation units[1], but\nprotocol_openssl.c doesn't define any symbols (unless you have an\nancient EOL'd openssl), so longfin and CI say:\n\n/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib:\nfile: libpgcommon.a(protocol_openssl.o) has no symbols\n\nI guess we still can't switch to (Apple) libtool. Maybe configure\nshould be doing a test and adding it to LIBOBJS or a similar variable\nonly if necessary, or something like that?\n\n[1] https://www.postgresql.org/message-id/flat/28521.1426352337%40sss.pgh.pa.us\n\n\n", "msg_date": "Thu, 16 Dec 2021 15:44:43 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Apple's ranlib warns about protocol_openssl.c" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Apple's ranlib doesn't like empty translation units[1], but\n> protocol_openssl.c doesn't define any symbols (unless you have an\n> ancient EOL'd openssl), so longfin and CI say:\n\n> /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib:\n> file: libpgcommon.a(protocol_openssl.o) has no symbols\n\n> I guess we still can't switch to (Apple) libtool. Maybe configure\n> should be doing a test and adding it to LIBOBJS or a similar variable\n> only if necessary, or something like that?\n\nHmm ... 
right now, with only one test to make, the configure change\nwouldn't be that hard; but that might change in future, plus I'm\nunsure how to do it in MSVC.\n\nA lazy man's answer could be to ensure the translation unit isn't\nempty, say by adding\n\n+#else\n+\n+int dummy_protocol_openssl_variable = 0;\n+\n #endif /* !SSL_CTX_set_min_proto_version */\n\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Dec 2021 10:25:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Apple's ranlib warns about protocol_openssl.c" }, { "msg_contents": "On 16.12.21 16:25, Tom Lane wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> Apple's ranlib doesn't like empty translation units[1], but\n>> protocol_openssl.c doesn't define any symbols (unless you have an\n>> ancient EOL'd openssl), so longfin and CI say:\n> \n>> /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib:\n>> file: libpgcommon.a(protocol_openssl.o) has no symbols\n> \n>> I guess we still can't switch to (Apple) libtool. Maybe configure\n>> should be doing a test and adding it to LIBOBJS or a similar variable\n>> only if necessary, or something like that?\n> \n> Hmm ... right now, with only one test to make, the configure change\n> wouldn't be that hard; but that might change in future, plus I'm\n> unsure how to do it in MSVC.\n> \n> A lazy man's answer could be to ensure the translation unit isn't\n> empty, say by adding\n\nThese are not errors, right? So why is this a problem?\n\n\n", "msg_date": "Thu, 16 Dec 2021 16:48:19 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Apple's ranlib warns about protocol_openssl.c" }, { "msg_contents": "On Fri, Dec 17, 2021 at 4:48 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> These are not errors, right? So why is this a problem?\n\nYeah they're just warnings. 
I don't personally care if we just ignore\nthem until we drop OpenSSL < 1.1.0 or macOS < 10.something. I\nmentioned it because in the past we worked to get rid of these sorts\nof warnings (there have been a couple of rounds at least), and they\nshow up more obviously in the CI scripts because they use -s, so the 3\nwarning lines are the only output.\n\n\n", "msg_date": "Fri, 17 Dec 2021 06:34:43 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Apple's ranlib warns about protocol_openssl.c" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Fri, Dec 17, 2021 at 4:48 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> These are not errors, right? So why is this a problem?\n\n> Yeah they're just warnings. I don't personally care if we just ignore\n> them until we drop OpenSSL < 1.1.0 or macOS < 10.something. I\n> mentioned it because in the past we worked to get rid of these sorts\n> of warnings (there have been a couple of rounds at least), and they\n> show up more obviously in the CI scripts because they use -s, so the 3\n> warning lines are the only output.\n\nYeah, \"zero chatter from a successful build\" has been a goal\nfor a while now.\n\nHaving said that, I'm not seeing any such warning when I build\nwith openssl 1.1.1k on my own Mac, so I'm a bit confused why\nThomas sees it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Dec 2021 13:22:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Apple's ranlib warns about protocol_openssl.c" }, { "msg_contents": "> On 16 Dec 2021, at 19:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Having said that, I'm not seeing any such warning when I build\n> with openssl 1.1.1k on my own Mac, so I'm a bit confused why\n> Thomas sees it.\n\nMaybe it's dependent on macOS/Xcode release? 
I see the warning on my Catalina\nlaptop.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 16 Dec 2021 21:13:20 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Apple's ranlib warns about protocol_openssl.c" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 16 Dec 2021, at 19:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Having said that, I'm not seeing any such warning when I build\n>> with openssl 1.1.1k on my own Mac, so I'm a bit confused why\n>> Thomas sees it.\n\n> Maybe it's dependant on macOS/XCode release? I see the warning on my Catalina\n> laptop.\n\nCould be. I tried it on Monterey, but not anything older.\n(longfin is still on Big Sur, because I've been lazy about\nupdating it.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Dec 2021 15:38:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Apple's ranlib warns about protocol_openssl.c" }, { "msg_contents": "On Fri, Dec 17, 2021 at 9:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> >> On 16 Dec 2021, at 19:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Having said that, I'm not seeing any such warning when I build\n> >> with openssl 1.1.1k on my own Mac, so I'm a bit confused why\n> >> Thomas sees it.\n>\n> > Maybe it's dependant on macOS/XCode release? I see the warning on my Catalina\n> > laptop.\n>\n> Could be. I tried it on Monterey, but not anything older.\n> (longfin is still on Big Sur, because I've been lazy about\n> updating it.)\n\nHmm. Happened[1] with Andres's CI scripts, which (at least on the\nversion I used here, may not be his latest) runs on macOS Monterey and\ninstalls openssl from brew which is apparently 3.0.0. 
Wild guess:\nsome versions of openssl define functions, and some define macros, and\nhere we're looking for the macros?\n\nhttps://cirrus-ci.com/task/6100205941555200\n\n\n", "msg_date": "Fri, 17 Dec 2021 14:26:53 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Apple's ranlib warns about protocol_openssl.c" }, { "msg_contents": "Hi,\n\nOn 2021-12-17 14:26:53 +1300, Thomas Munro wrote:\n> On Fri, Dec 17, 2021 at 9:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Could be. I tried it on Monterey, but not anything older.\n> > (longfin is still on Big Sur, because I've been lazy about\n> > updating it.)\n> \n> Hmm. Happened[1] with Andres's CI scripts, which (at least on the\n> version I used here, may not be his latest) runs on macOS Monterey and\n> installs openssl from brew which is apparently 3.0.0. Wild guess:\n> some versions of openssl define functions, and some define macros, and\n> here we're looking for the macros?\n\nI also see it on an m1 mini I got when building against openssl 3.\n\nThere is -no_warning_for_no_symbols in apple's ranlib. But perhaps\nthere's another way around this:\nWe have ssl_protocol_version_to_openssl() in both be-secure-openssl.c\nand fe-secure-openssl.c. Perhaps we should just move it to\nprotocol_openssl.c?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 30 Dec 2021 16:49:35 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Apple's ranlib warns about protocol_openssl.c" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I also see it on an m1 mini I got when building against openssl 3.\n\nHuh, I wonder why I'm not seeing it.\n\n> There is -no_warning_for_no_symbols in apple's ranlib. But perhaps\n> there's another way around this:\n> We have ssl_protocol_version_to_openssl() in both be-secure-openssl.c\n> and fe-secure-openssl.c. 
Perhaps we should just move it to\n> protocol_openssl.c?\n\nThose functions have the same name, but not the same arguments,\nso it'd take some refactoring to share any code.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 30 Dec 2021 20:03:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Apple's ranlib warns about protocol_openssl.c" }, { "msg_contents": "Hi,\n\nOn 2021-12-16 21:13:20 +0100, Daniel Gustafsson wrote:\n> > On 16 Dec 2021, at 19:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > Having said that, I'm not seeing any such warning when I build\n> > with openssl 1.1.1k on my own Mac, so I'm a bit confused why\n> > Thomas sees it.\n>\n> Maybe it's dependant on macOS/XCode release? I see the warning on my Catalina\n> laptop.\n\nI think it might an x86_64 vs arm64 thing.\n\ncd ~/build/postgres/dev-assert/vpath/src/common\n\n$ cat protocol_openssl.s\n\t.section\t__TEXT,__text,regular,pure_instructions\n\t.build_version macos, 12, 0\tsdk_version 12, 3\n.subsections_via_symbols\n\n$ as -arch arm64 protocol_openssl.s -o protocol_openssl-arm64.o\n$ as -arch x86_64 protocol_openssl.s -o protocol_openssl-x86_64.o\n\n$ llvm-objdump -t protocol_openssl-x86_64.o\n\nprotocol_openssl-x86_64.o:\tfile format mach-o 64-bit x86-64\n\nSYMBOL TABLE:\n\n$ llvm-objdump -t protocol_openssl-arm64.o\n\nprotocol_openssl-arm64.o:\tfile format mach-o arm64\n\nSYMBOL TABLE:\n0000000000000000 l F __TEXT,__text ltmp0\n\n\nFor some reason arm64 ends up with that ltmp0 symbol, which presumably\nprevents the warning from being triggered.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 26 Sep 2022 21:31:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Apple's ranlib warns about protocol_openssl.c" } ]
[ { "msg_contents": "Hi all,\n\nAs per $subject, avoiding the flush of the new cluster's data\ndirectory shortens a bit the runtime of the test. In some of my slow\nVMs, aka Windows, this shaves a couple of seconds even if the bulk of\nthe time is still spent on the main regression test suite.\n\nIn pg_upgrade, we let the flush happen with initdb --sync-only, based\non the binary path of the new cluster, so I think that we are not\ngoing to miss any test coverage by skipping that.\n\nThoughts or opinions?\n--\nMichael", "msg_date": "Thu, 16 Dec 2021 15:50:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Addition of --no-sync to pg_upgrade for test speedup" }, { "msg_contents": "On 16.12.21 07:50, Michael Paquier wrote:\n> As per $subject, avoiding the flush of the new cluster's data\n> directory shortens a bit the runtime of the test. In some of my slow\n> VMs, aka Windows, this shaves a couple of seconds even if the bulk of\n> the time is still spent on the main regression test suite.\n> \n> In pg_upgrade, we let the flush happen with initdb --sync-only, based\n> on the binary path of the new cluster, so I think that we are not\n> going to miss any test coverage by skipping that.\n\nI think that is reasonable.\n\nMaybe we could have some global option, like some environment variable, \nthat enables the \\"sync\\" mode in all tests, so it's easy to test that \nonce in a while. 
Not really a requirement for your patch, but an idea \nin case this is a concern.\n\n\n\n", "msg_date": "Fri, 17 Dec 2021 10:21:04 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Addition of --no-sync to pg_upgrade for test speedup" }, { "msg_contents": "On Fri, Dec 17, 2021 at 10:21:04AM +0100, Peter Eisentraut wrote:\n> On 16.12.21 07:50, Michael Paquier wrote:\n> > As per $subject, avoiding the flush of the new cluster's data\n> > directory shortens a bit the runtime of the test. In some of my slow\n> > VMs, aka Windows, this shaves a couple of seconds even if the bulk of\n> > the time is still spent on the main regression test suite.\n> > \n> > In pg_upgrade, we let the flush happen with initdb --sync-only, based\n> > on the binary path of the new cluster, so I think that we are not\n> > going to miss any test coverage by skipping that.\n> \n> I think that is reasonable.\n> \n> Maybe we could have some global option, like some environment variable, that\n> enables the \\"sync\\" mode in all tests, so it's easy to test that once in a\n> while. Not really a requirement for your patch, but an idea in case this is\n> a concern.\n\nYes, I think it would be good to see all the places we might want to\npass the no-sync option.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 17 Dec 2021 09:47:05 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Addition of --no-sync to pg_upgrade for test speedup" }, { "msg_contents": "On Fri, Dec 17, 2021 at 09:47:05AM -0500, Bruce Momjian wrote:\n> On Fri, Dec 17, 2021 at 10:21:04AM +0100, Peter Eisentraut wrote:\n>> I think that is reasonable.\n\nThanks. 
I have applied that, as that really helped here.\n\n>> Maybe we could have some global option, like some environment variable, that\n>> enables the \\"sync\\" mode in all tests, so it's easy to test that once in a\n>> while. Not really a requirement for your patch, but an idea in case this is\n>> a concern.\n> \n> Yes, I think it would be good to see all the places we might want to\n> pass the no-sync option.\n\nThe remaining places in src/bin/ that I can see are pg_resetwal, where\nwe would fsync() a WAL segment full of zeros, and pg_recvlogical\nOutputFsync(), which does not point to much data, I guess. The first\none may be worth it, but that's just 16MB we are talking about and\nWriteEmptyXLOG() is not a code path taken currently by the tests.\n\nWe could introduce a new environment variable if one wishes to enforce\nthose flushes, say PG_TEST_SYNC, on top of patching any TAP test that\nhas a --no-sync to filter it out.\n--\nMichael", "msg_date": "Sat, 18 Dec 2021 18:30:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Addition of --no-sync to pg_upgrade for test speedup" }, { "msg_contents": "On 2021-Dec-16, Michael Paquier wrote:\n\n> In pg_upgrade, we let the flush happen with initdb --sync-only, based\n> on the binary path of the new cluster, so I think that we are not\n> going to miss any test coverage by skipping that.\n\nThere was one patch of mine with breakage that only manifested in the\npg_upgrade test *because* of its lack of no-sync. I'm afraid that this\nchange would hide certain problems.\nhttps://postgr.es/m/20210130023011.n545o54j65t4kgxn@alap3.anarazel.de\n\n> Thoughts or opinions?\n\nI'm not 100% comfortable with this. What can we do to preserve *some*\ntesting that includes syncing? 
Maybe some option that a few buildfarm\nanimals use?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"The Postgresql hackers have what I call a \"NASA space shot\" mentality.\n Quite refreshing in a world of \"weekend drag racer\" developers.\"\n(Scott Marlowe)\n\n\n", "msg_date": "Mon, 20 Dec 2021 10:46:13 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Addition of --no-sync to pg_upgrade for test speedup" }, { "msg_contents": "On Mon, Dec 20, 2021 at 10:46:13AM -0300, Alvaro Herrera wrote:\n> On 2021-Dec-16, Michael Paquier wrote:\n>> In pg_upgrade, we let the flush happen with initdb --sync-only, based\n>> on the binary path of the new cluster, so I think that we are not\n>> going to miss any test coverage by skipping that.\n> \n> There was one patch of mine with breakage that only manifested in the\n> pg_upgrade test *because* of its lack of no-sync. I'm afraid that this\n> change would hide certain problems.\n> https://postgr.es/m/20210130023011.n545o54j65t4kgxn@alap3.anarazel.de\n\nHmm. This talks about fsync=on being a factor counting in detecting a\nfailure with the backend. Why would the fsync done with initdb\n--sync-only on the target cluster once pg_upgrade is done change\nsomething here?\n\n> I'm not 100% comfortable with this. What can we do to preserve *some*\n> testing that include syncing? Maybe some option that a few buildfarm\n> animals use?\n\nIf you object about this part, I am fine to revert the change in\ntest.sh until there is a better facility to enforce syncs across tests\nin the buildfarm, though. I can hack something to centralize all\nthat, of course, but I am not sure when I'll be able to do so in the\nshort term. 
Could I keep that in MSVC's vcregress.pl at least for the\ntime being?\n--\nMichael", "msg_date": "Wed, 22 Dec 2021 16:57:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Addition of --no-sync to pg_upgrade for test speedup" } ]
[ { "msg_contents": "This rearranges the version-dependent pieces in the new more modular style.\n\nI had originally written this before pre-9.2 support was removed and it \nhad a few more branches then. But I think it is still useful, and there \nare some pending patches that might add more branches for newer versions.", "msg_date": "Thu, 16 Dec 2021 10:56:51 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "pg_dump: Refactor getIndexes()" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> This rearranges the version-dependent pieces in the new more modular style.\n> I had originally written this before pre-9.2 support was removed and it \n> had a few more branches then. But I think it is still useful, and there \n> are some pending patches that might add more branches for newer versions.\n\nI didn't double-check the details, but +1 for doing this (and similarly\nelsewhere, whenever anyone feels motivated to do so).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Dec 2021 10:15:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump: Refactor getIndexes()" } ]
[ { "msg_contents": "This code introduced in ba3e76cc571eba3dea19c9465ff15ac3ac186576 looks\nwrong to me:\n\n total_groups = cheapest_partial_path->rows *\n cheapest_partial_path->parallel_workers;\n path = (Path *)\n create_gather_merge_path(root, ordered_rel,\n path,\n path->pathtarget,\n root->sort_pathkeys, NULL,\n &total_groups);\n\nThis too:\n\n total_groups = input_path->rows *\n input_path->parallel_workers;\n\nThis came to my attention because Dave Page sent me a query plan that\nlooks like this:\n\nGather Merge (cost=22617.94..22703.35 rows=732 width=97) (actual\ntime=2561.476..2561.856 rows=879 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Sort (cost=21617.92..21618.83 rows=366 width=97) (actual\ntime=928.329..928.378 rows=293 loops=3)\n Sort Key: aid\n Sort Method: quicksort Memory: 155kB\n Worker 0: Sort Method: quicksort Memory: 25kB\n Worker 1: Sort Method: quicksort Memory: 25kB\n -> Parallel Seq Scan on accounts_1m (cost=0.00..21602.33\nrows=366 width=97) (actual time=74.391..74.518 rows=293 loops=3)\n Filter: (aid < 10000000)\n Rows Removed by Filter: 333040\n\nIf you look at the actual row count estimates, you see that the Gather\nMerge produces 3 times the number of rows that the Parallel Seq Scan\nproduces, which is completely correct, because the raw number is 879\nin both cases, but EXPLAIN unhelpfully divides the displayed row count\nby the loop count, which in this case is 3. If you look at the\nestimated row count, you see that the Gather Merge is estimated to\nproduce exactly 2 times the number of rows that the nodes under it\nwould produce. That's not a very good estimate, unless\nparallel_leader_participation=off, which in this case it isn't.\n\nWhat's happening here is that the actual number of rows produced by\naccounts_1m is actually 879 and is estimated as 879.\nget_parallel_divisor() decides to divide that number by 2.4, and gets\n366, because it thinks the leader will do less work than the other\nworkers. 
Then the code above fires and, instead of letting the\noriginal row count estimate for accounts_1m apply to the Gather Merge\npath, it overrides it with 2 * 366 = 732. This cannot be right.\nReally, I don't think it should be overriding the row count estimate\nat all. But if it is, multiplying by the number of workers can't be\nright, because the leader can also do stuff.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 16 Dec 2021 11:48:19 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "incremental sort vs. gather paths" }, { "msg_contents": "\n\nOn 12/16/21 17:48, Robert Haas wrote:\n> This code introduced in ba3e76cc571eba3dea19c9465ff15ac3ac186576 looks\n> wrong to me:\n> \n> total_groups = cheapest_partial_path->rows *\n> cheapest_partial_path->parallel_workers;\n> path = (Path *)\n> create_gather_merge_path(root, ordered_rel,\n> path,\n> path->pathtarget,\n> root->sort_pathkeys, NULL,\n> &total_groups);\n> \n> This too:\n> \n> total_groups = input_path->rows *\n> input_path->parallel_workers;\n> \n> This came to my attention because Dave Page sent me a query plan that\n> looks like this:\n> \n> Gather Merge (cost=22617.94..22703.35 rows=732 width=97) (actual\n> time=2561.476..2561.856 rows=879 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> -> Sort (cost=21617.92..21618.83 rows=366 width=97) (actual\n> time=928.329..928.378 rows=293 loops=3)\n> Sort Key: aid\n> Sort Method: quicksort Memory: 155kB\n> Worker 0: Sort Method: quicksort Memory: 25kB\n> Worker 1: Sort Method: quicksort Memory: 25kB\n> -> Parallel Seq Scan on accounts_1m (cost=0.00..21602.33\n> rows=366 width=97) (actual time=74.391..74.518 rows=293 loops=3)\n> Filter: (aid < 10000000)\n> Rows Removed by Filter: 333040\n> \n> If you look at the actual row count estimates, you see that the Gather\n> Merge produces 3 times the number of rows that the Parallel Seq Scan\n> produces, which is completely correct, 
because the raw number is 879\n> in both cases, but EXPLAIN unhelpfully divides the displayed row count\n> by the loop count, which in this case is 3. If you look at the\n> estimated row count, you see that the Gather Merge is estimated to\n> produce exactly 2 times the number of rows that the nodes under it\n> would produce. That's not a very good estimate, unless\n> parallel_leader_participation=off, which in this case it isn't.\n> \n> What's happening here is that the actual number of rows produced by\n> accounts_1m is actually 879 and is estimated as 879.\n> get_parallel_divisor() decides to divide that number by 2.4, and gets\n> 366, because it thinks the leader will do less work than the other\n> workers. Then the code above fires and, instead of letting the\n> original row count estimate for accounts_1m apply to the Gather Merge\n> path, it overrides it with 2 * 366 = 732. This cannot be right.\n> Really, I don't think it should be overriding the row count estimate\n> at all. But if it is, multiplying by the number of workers can't be\n> right, because the leader can also do stuff.\n> \n\nMaybe, but other places (predating incremental sort) creating Gather \nMerge do the same thing, and commit ba3e76cc57 merely copied this. For \nexample generate_gather_paths() does this:\n\n foreach(lc, rel->partial_pathlist)\n {\n Path *subpath = (Path *) lfirst(lc);\n GatherMergePath *path;\n\n if (subpath->pathkeys == NIL)\n continue;\n\n rows = subpath->rows * subpath->parallel_workers;\n path = create_gather_merge_path(root, rel, subpath,\n rel->reltarget,\n subpath->pathkeys, NULL, rowsp);\n add_path(rel, &path->path);\n }\n\ni.e. it's doing the same (rows * parallel_workers) calculation.\n\nIt may not be the right way to estimate this, of course. 
But I'd say \nif ba3e76cc57 is doing it wrong, so are the older places.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 16 Dec 2021 18:16:49 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: incremental sort vs. gather paths" }, { "msg_contents": "On Thu, Dec 16, 2021 at 12:16 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Maybe, but other places (predating incremental sort) creating Gather\n> Merge do the same thing, and commit ba3e76cc57 merely copied this. For\n> example generate_gather_paths() does this:\n>\n> foreach(lc, rel->partial_pathlist)\n> {\n> Path *subpath = (Path *) lfirst(lc);\n> GatherMergePath *path;\n>\n> if (subpath->pathkeys == NIL)\n> continue;\n>\n> rows = subpath->rows * subpath->parallel_workers;\n> path = create_gather_merge_path(root, rel, subpath,\n> rel->reltarget,\n> subpath->pathkeys, NULL, rowsp);\n> add_path(rel, &path->path);\n> }\n>\n> i.e. it's doing the same (rows * parallel_workers) calculation.\n\nUgh. I was hoping this mess wasn't my fault, but it seems that it is. :-(\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 16 Dec 2021 12:24:28 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: incremental sort vs. gather paths" } ]
[ { "msg_contents": "Hi all,\r\n\r\nIn keeping with my theme of expanding the authentication/authorization\r\noptions for the server, attached is an experimental patchset that lets\r\nPostgres determine an authenticated user's allowed roles by querying an\r\nLDAP server, and enables SASL binding for those queries.\r\n\r\nThis lets you delegate pieces of pg_ident.conf to a central server, so\r\nthat you don't have to run any synchronization scripts (or deal with\r\nassociated staleness problems, repeated load on the LDAP deployment,\r\netc.). And it lets you make those queries with a client certificate\r\ninstead of a bind password, or at the very least protect your bind\r\npassword with some SCRAM crypto. You don't have to use the LDAP auth\r\nmethod for this to work; you can combine it with Kerberos or certs or\r\nany auth method that already supports pg_ident.\r\n\r\nThe target users, in my mind, are admins who are already using an auth\r\nmethod with user maps, but have many deployments and want easier\r\ncontrol over granting and revoking database access from one location.\r\nThis won't help you so much if you need to have exactly one role per\r\nuser -- there's no logic to automatically create roles, so it can't\r\nfully replace the existing synchronization scripts that are out there.\r\nBut if all you need is \"X, Y, and Z are allowed to log in as guest, and\r\nA and B may connect as admins\", then this is meant to simplify your\r\nlife.\r\n\r\nThis is a smaller step than my previous proof-of-concept, which handled\r\nfully federated authentication and authorization via an OAuth provider\r\n[1], and it should be a nice companion to my patch that adds user\r\nmappings to the LDAP auth method [2], though I haven't tried them\r\ntogether yet. (I've also been thinking about pulling group membership\r\ninformation out of Kerberos authorization data, for those of you using\r\nActive Directory. 
Things for later.)\r\n\r\n= How-To =\r\n\r\nIf you want to try it out -- on a non-production system please -- take\r\na look at the test suite in src/test/ldap, which has been filled out\r\nwith some example usage. The core features are the \"ldapmap\" HBA option\r\n(which you would use instead of \"map\" in your existing HBA) and the\r\n\"ldapsaslmechs\" HBA option, which you can set to a list of SASL\r\nmechanisms that you will accept. (The list of supported mechanisms is\r\ndetermined by both systems' LDAP and SASL libraries, not by Postgres.)\r\n\r\nThe tricky part is writing the pg_ident line correctly, because it's\r\ncurrently not a very good user experience. The query is in the form of\r\nan LDAP URL. It needs to return exactly one entry for the user being\r\nauthorized; the attribute values contained in that entry will be\r\ninterpreted as the list of roles that the user is allowed to connect\r\nas. Regex matching and substitution are supported as they are for\r\nregular maps. Here's a sample:\r\n\r\npg_ident.conf:\r\n\r\n myldapmap /^(.*)$ ldap://example.com/dc=example,dc=com?postgresRole?sub?(uid=\\1)\r\n\r\npg_hba.conf:\r\n\r\n hostssl all all all cert ldapmap=myldapmap ldaptls=1 ldapsaslmechs=scram-sha-1 ldapbinddn=admin ldapbindpasswd=secret\r\n\r\nThis particular setup can be described as follows:\r\n\r\n- Clients must use client certificates to authenticate to Postgres.\r\n- Once the certificate is verified, Postgres will connect to the LDAP\r\nserver at example.com, issue StartTLS, and begin a SCRAM-SHA-1 exchange\r\nusing the bind username and password (admin/secret).\r\n- Once that completes, Postgres will issue a query for the LDAP user\r\nthat has a uid matching the CN of the client certificate. 
(If more than\r\none user matches, authorization fails.)\r\n- The client's PGUSER will be compared with the list of postgresRole\r\nattributes belonging to that LDAP user, and if one matches,\r\nauthorization succeeds.\r\n\r\n= Areas for Improvement =\r\n\r\nI think it would be nice to support LDAP group membership in addition\r\nto object attributes.\r\n\r\nSettings for the LDAP connection are currently spread between pg_hba,\r\npg_ident, and environment variables like LDAPTLS_CERT. I made the\r\nsituation worse by allowing the pg_ident query to contain a scheme,\r\nhost, and port. That makes it seem like you could send different users\r\nto different LDAP servers, but since they would all have to share\r\nexactly the same TLS settings anyway, I think this was a mistake on my\r\npart.\r\n\r\nThat mistake aside, I think the current URL query syntax is powerful\r\nbut unintuitive. I would rather see that as an option for power users,\r\nand let other people just specify the user filter and role attribute\r\nseparately. And there needs to be more logging around the feature, to\r\nhelp debug problems.\r\n\r\nRegex substitution of user-controlled data into an LDAP query is\r\nperilous, and I don't like it. For now I have restricted the allowed\r\ncharacters as a first mitigation.\r\n\r\nIs it safe to use listen_addresses in the test suite, as I have done,\r\nas long as the HBA requires authentication? Or is that reopening a\r\nsecurity hole? I seem to recall discussion on this but my search-fu has\r\nfailed me.\r\n\r\nThere's a lot of code duplication in the current patchset that would\r\nneed to be undone.\r\n\r\n...and more; see TODOs in the patches if you're interested.\r\n\r\n= Patch Roadmap =\r\n\r\n- 0001 fixes error messages that are printed when ldap_url_parse()\r\nfails. Since the pg_ident queries use LDAP URLs, and it's easy to get\r\nthem wrong, that fix is particularly important for this patchset. 
But I\r\nthink it could potentially be applied separately.\r\n\r\n- 0002 implements the \"ldapmap\" HBA option and enables the ldaptls,\r\nldapbinddn, and ldapbindpasswd options for it. It also adds\r\ncorresponding tests to the LDAP suite.\r\n\r\n- 0003 tests the use of client certificates via LDAP environment\r\nvariables. (This is already supported today but I didn't see any\r\ncoverage, which will be important for the last patch.)\r\n\r\n- 0004 implements the \"ldapsaslmechs\" HBA option and adds enough SASL\r\nsupport for at least the EXTERNAL and SCRAM-* mechanisms. Others may\r\nwork but I haven't tested them. This feature is available only if you\r\nhave the <sasl/sasl.h> header on your system at build time.\r\n\r\nWDYT? (My responses here will be slower than usual. Hope you all have a\r\ngreat end to the year!)\r\n\r\n--Jacob\r\n\r\n[1] https://www.postgresql.org/message-id/flat/d1b467a78e0e36ed85a09adf979d04cf124a9d4b.camel@vmware.com\r\n[2] https://www.postgresql.org/message-id/flat/1a61806047c536e7528b943d0cfe12608118ca31.camel@vmware.com", "msg_date": "Thu, 16 Dec 2021 23:48:57 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "[PoC] Delegating pg_ident to a third party" }, { "msg_contents": "\nOn 17.12.21 00:48, Jacob Champion wrote:\n> WDYT? (My responses here will be slower than usual. Hope you all have a\n> great end to the year!)\n\nLooks interesting. I wonder whether putting this into pg_ident.conf is \nsensible. I suspect people will want to eventually add more features \naround this, like automatically creating roles or role memberships, at \nwhich point pg_ident.conf doesn't seem appropriate anymore. Should we \nhave a new file for this? 
Do you have any further ideas?\n\n\n", "msg_date": "Fri, 17 Dec 2021 10:06:05 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] Delegating pg_ident to a third party" }, { "msg_contents": "On Fri, 2021-12-17 at 10:06 +0100, Peter Eisentraut wrote:\r\n> On 17.12.21 00:48, Jacob Champion wrote:\r\n> > WDYT? (My responses here will be slower than usual. Hope you all have a\r\n> > great end to the year!)\r\n> \r\n> Looks interesting. I wonder whether putting this into pg_ident.conf is \r\n> sensible. I suspect people will want to eventually add more features \r\n> around this, like automatically creating roles or role memberships, at \r\n> which point pg_ident.conf doesn't seem appropriate anymore.\r\n\r\nYeah, pg_ident is getting too cramped for this.\r\n\r\n> Should we have a new file for this? Do you have any further ideas?\r\n\r\nMy experience with these configs is mostly limited to HTTP servers.\r\nThat said, it's pretty hard to beat the flexibility of arbitrary key-\r\nvalue pairs inside nested contexts. It's nice to be able to say things\r\nlike\r\n\r\n Everyone has to use LDAP auth\r\n With this server\r\n And these TLS settings\r\n\r\n Except admins\r\n who additionally need client certificates\r\n with this CA root\r\n\r\n And Jacob\r\n who isn't allowed in anymore\r\n\r\nAre there any existing discussions along these lines that I should take\r\na look at?\r\n\r\n--Jacob\r\n", "msg_date": "Mon, 3 Jan 2022 16:46:16 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PoC] Delegating pg_ident to a third party" }, { "msg_contents": "Greetings,\n\n* Jacob Champion (pchampion@vmware.com) wrote:\n> On Fri, 2021-12-17 at 10:06 +0100, Peter Eisentraut wrote:\n> > On 17.12.21 00:48, Jacob Champion wrote:\n> > > WDYT? (My responses here will be slower than usual. Hope you all have a\n> > > great end to the year!)\n> > \n> > Looks interesting. 
I wonder whether putting this into pg_ident.conf is \n> > sensible. I suspect people will want to eventually add more features \n> > around this, like automatically creating roles or role memberships, at \n> > which point pg_ident.conf doesn't seem appropriate anymore.\n\nThis is the part that I really wonder about also ... I've always viewed\npg_ident as being intended mainly for one-to-one kind of mappings and\nnot the \"map a bunch of different users into the same role\" that this\nadvocated for. Being able to have roles and memberships automatically\ncreated is much more the direction that I'd say we should be going in,\nso that in-database auditing has an actual user to go on and not some\ngeneric role that could be any number of people.\n\nI'd go a step further and suggest that the way to do this is with a\nbackground worker that's started up and connects to an LDAP\ninfrastructure and listens for changes, allowing the system to pick up\non new roles/memberships as soon as they're created in the LDAP\nenvironment. That would then be controlled by appropriate settings in\npostgresql.conf/.auto.conf.\n\n> Yeah, pg_ident is getting too cramped for this.\n\nAll that said, I do see how having the ability to call out to another\nsystem for mappings may be useful, so I'm not sure that we shouldn't\nconsider this specific change and have it be specifically just for\nmappings, in which case pg_ident seems appropriate.\n\n> > Should we have a new file for this? Do you have any further ideas?\n> \n> My experience with these configs is mostly limited to HTTP servers.\n> That said, it's pretty hard to beat the flexibility of arbitrary key-\n> value pairs inside nested contexts. 
It's nice to be able to say things\n> like\n> \n> Everyone has to use LDAP auth\n> With this server\n> And these TLS settings\n> \n> Except admins\n> who additionally need client certificates\n> with this CA root\n> \n> And Jacob\n> who isn't allowed in anymore\n\nI certainly don't think we should have this be limited to LDAP auth-\nsuch an external mapping ability is suitable for any authentication\nmethod that supports a mapping (thinking specifically of GSSAPI, of\ncourse..). Not sure if that's what was meant above but did want to\nmake sure that was clear. The rest looks a lot more like pg_hba or\nperhaps in-database privileges like roles/memberships existing or not\nand CONNECT rights. I'm not really sold on the idea of adding yet even\nmore different ways to control authorization.\n\nThanks,\n\nStephen", "msg_date": "Mon, 3 Jan 2022 12:36:05 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: [PoC] Delegating pg_ident to a third party" }, { "msg_contents": "On Mon, 2022-01-03 at 12:36 -0500, Stephen Frost wrote:\r\n> * Jacob Champion (pchampion@vmware.com) wrote:\r\n> > On Fri, 2021-12-17 at 10:06 +0100, Peter Eisentraut wrote:\r\n> > > On 17.12.21 00:48, Jacob Champion wrote:\r\n> > > > WDYT? (My responses here will be slower than usual. Hope you all have a\r\n> > > > great end to the year!)\r\n> > > \r\n> > > Looks interesting. I wonder whether putting this into pg_ident.conf is \r\n> > > sensible. I suspect people will want to eventually add more features \r\n> > > around this, like automatically creating roles or role memberships, at \r\n> > > which point pg_ident.conf doesn't seem appropriate anymore.\r\n> \r\n> This is the part that I really wonder about also ... I've always viewed\r\n> pg_ident as being intended mainly for one-to-one kind of mappings and\r\n> not the \"map a bunch of different users into the same role\" that this\r\n> advocated for. 
Being able to have roles and memberships automatically\r\n> created is much more the direction that I'd say we should be going in,\r\n> so that in-database auditing has an actual user to go on and not some\r\n> generic role that could be any number of people.\r\n\r\nThat last point was my motivation for the authn_id patch [1] -- so that\r\nauditing could see the actual user _and_ the generic role. The\r\ninformation is already there to be used, it's just not exposed to the\r\nstats framework yet.\r\n\r\nForcing one role per individual end user is wasteful and isn't really\r\nmaking good use of the role-based system that you already have.\r\nGenerally speaking, when administering hundreds or thousands of users,\r\npeople start dividing them up into groups as opposed to dealing with\r\nthem individually. So I don't think new features should be taking away\r\nflexibility in this area -- if one role per user already works well for\r\nyou, great, but don't make everyone do the same.\r\n\r\n> I'd go a step further and suggest that the way to do this is with a\r\n> background worker that's started up and connects to an LDAP\r\n> infrastructure and listens for changes, allowing the system to pick up\r\n> on new roles/memberships as soon as they're created in the LDAP\r\n> environment. That would then be controlled by appropriate settings in\r\n> postgresql.conf/.auto.conf.\r\n\r\nThis is roughly what you can already do with existing (third-party)\r\ntools, and that approach isn't scaling out in practice for some of our\r\nexisting customers. 
The load on the central server, for thousands of\r\nidle databases dialing in just to see if there are any new users, is\r\nhuge.\r\n\r\n> All that said, I do see how having the ability to call out to another\r\n> system for mappings may be useful, so I'm not sure that we shouldn't\r\n> consider this specific change and have it be specifically just for\r\n> mappings, in which case pg_ident seems appropriate.\r\n\r\nYeah, this PoC was mostly an increment on the functionality that\r\nalready existed. The division between what goes in pg_hba and what goes\r\nin pg_ident is starting to blur with this patchset, though, and I think\r\nPeter's point is sound.\r\n\r\n> I certainly don't think we should have this be limited to LDAP auth-\r\n> such an external mapping ability is suitable for any authentication\r\n> method that supports a mapping (thinking specifically of GSSAPI, of\r\n> course..). Not sure if that's what was meant above but did want to\r\n> make sure that was clear.\r\n\r\nYou can't use usermaps with LDAP auth yet, so no, that's not what I\r\nmeant. (I have another patch for that feature in commitfest, which\r\nwould allow these two things to be used together.)\r\n\r\nThanks,\r\n--Jacob\r\n\r\n[1] https://www.postgresql.org/message-id/flat/E1lTwp4-0002l4-L9%40gemulon.postgresql.org\r\n", "msg_date": "Mon, 3 Jan 2022 18:29:26 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PoC] Delegating pg_ident to a third party" }, { "msg_contents": "Greetings,\n\n* Jacob Champion (pchampion@vmware.com) wrote:\n> On Mon, 2022-01-03 at 12:36 -0500, Stephen Frost wrote:\n> > * Jacob Champion (pchampion@vmware.com) wrote:\n> > > On Fri, 2021-12-17 at 10:06 +0100, Peter Eisentraut wrote:\n> > > > On 17.12.21 00:48, Jacob Champion wrote:\n> > > > > WDYT? (My responses here will be slower than usual. Hope you all have a\n> > > > > great end to the year!)\n> > > > \n> > > > Looks interesting. 
I wonder whether putting this into pg_ident.conf is \n> > > > sensible. I suspect people will want to eventually add more features \n> > > > around this, like automatically creating roles or role memberships, at \n> > > > which point pg_ident.conf doesn't seem appropriate anymore.\n> > \n> > This is the part that I really wonder about also ... I've always viewed\n> > pg_ident as being intended mainly for one-to-one kind of mappings and\n> > not the \"map a bunch of different users into the same role\" that this\n> > advocated for. Being able to have roles and memberships automatically\n> > created is much more the direction that I'd say we should be going in,\n> > so that in-database auditing has an actual user to go on and not some\n> > generic role that could be any number of people.\n> \n> That last point was my motivation for the authn_id patch [1] -- so that\n> auditing could see the actual user _and_ the generic role. The\n> information is already there to be used, it's just not exposed to the\n> stats framework yet.\n\nWhile that helps, and I generally support adding that information to the\nlogs, it's certainly not nearly as good or useful as having the actual\nuser known to the database.\n\n> Forcing one role per individual end user is wasteful and isn't really\n> making good use of the role-based system that you already have.\n> Generally speaking, when administering hundreds or thousands of users,\n> people start dividing them up into groups as opposed to dealing with\n> them individually. So I don't think new features should be taking away\n> flexibility in this area -- if one role per user already works well for\n> you, great, but don't make everyone do the same.\n\nUsing the role system we have to assign privileges certainly is useful\nand sensible, of course, though I don't see where you've actually made\nan argument for why one role per individual is somehow wasteful or\nsomehow takes away from the role system that we have for granting\nrights. 
I'm also not suggesting that we make everyone do the same\nthing, indeed, later on I was supportive of having an external system\nprovide the mapping. Here, I'm just making the point that we should\nalso be looking at automatic role/membership creation.\n\n> > I'd go a step further and suggest that the way to do this is with a\n> > background worker that's started up and connects to an LDAP\n> > infrastructure and listens for changes, allowing the system to pick up\n> > on new roles/memberships as soon as they're created in the LDAP\n> > environment. That would then be controlled by appropriate settings in\n> > postgresql.conf/.auto.conf.\n> \n> This is roughly what you can already do with existing (third-party)\n> tools, and that approach isn't scaling out in practice for some of our\n> existing customers. The load on the central server, for thousands of\n> idle databases dialing in just to see if there are any new users, is\n> huge.\n\nIf you're referring specifically to cron-based tools which are\nconstantly hammering on the LDAP servers running the same queries over\nand over, sure, I agree that that's creating load on the LDAP\ninfrastructure (though, well, it was kind of designed to be very\nscalable for exactly that kind of load, no? So I'm not really sure why\nthat's such an issue..). That's also why I specifically wasn't\nsuggesting that and was instead suggesting that we have something that's\nconnected to one of the (hopefully, many, many) LDAP servers and is\ndoing change monitoring, allowing changes to be pushed down to PG,\nrather than cronjobs constantly running the same queries and re-checking\nthings over and over. 
I appreciate that that's also not free, but I\ndon't believe it's nearly as bad as the cron-based approach and it's\ncertainly something that an LDAP infrastructure should be really rather\ngood at.\n\n> > All that said, I do see how having the ability to call out to another\n> > system for mappings may be useful, so I'm not sure that we shouldn't\n> > consider this specific change and have it be specifically just for\n> > mappings, in which case pg_ident seems appropriate.\n> \n> Yeah, this PoC was mostly an increment on the functionality that\n> already existed. The division between what goes in pg_hba and what goes\n> in pg_ident is starting to blur with this patchset, though, and I think\n> Peter's point is sound.\n\nThis part I tend to disagree with- pg_ident for mappings and for ways to\ncall out to other systems to provide those mappings strikes me as\nentirely appropriate and doesn't blur the lines and that's really what\nthis patch seems to be primarily about. Peter noted that there might be\nother things we want to do and argued that those might not be\nappropriate in pg_ident, which I tend to agree with, but I don't think\nwe need to invent something entirely new for mappings when we have\npg_ident already.\n\nWhen it comes to the question of \"how to connect to an LDAP server for\n$whatever\", it seems like it'd be nice to be able to configure that once\nand reuse that configuration. Not sure I have a great suggestion for\nhow to do that. The approach this patch takes of adding options to\npg_hba for that, just like other options in pg_hba do, strikes me as\npretty reasonable. 
I would advocate for other methods to work when it\ncomes to authenticating to LDAP from PG though (such as GSSAPI, in\nparticular, of course...).\n\n> > I certainly don't think we should have this be limited to LDAP auth-\n> > such an external mapping ability is suitable for any authentication\n> > method that supports a mapping (thinking specifically of GSSAPI, of\n> > course..). Not sure if that's what was meant above but did want to\n> > make sure that was clear.\n> \n> You can't use usermaps with LDAP auth yet, so no, that's not what I\n> meant. (I have another patch for that feature in commitfest, which\n> would allow these two things to be used together.)\n\nYes, I'm aware of the other patch, just wanted to make sure the intent\nis for this to work for all map-supporting auth methods. Figured that\nwas the case but the examples in the prior email had me concerned and\njust wanted to make sure.\n\nThanks,\n\nStephen", "msg_date": "Mon, 3 Jan 2022 19:42:32 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: [PoC] Delegating pg_ident to a third party" }, { "msg_contents": "On Mon, 2022-01-03 at 19:42 -0500, Stephen Frost wrote:\r\n> * Jacob Champion (pchampion@vmware.com) wrote:\r\n> > \r\n> > That last point was my motivation for the authn_id patch [1] -- so that\r\n> > auditing could see the actual user _and_ the generic role. The\r\n> > information is already there to be used, it's just not exposed to the\r\n> > stats framework yet.\r\n> \r\n> While that helps, and I generally support adding that information to the\r\n> logs, it's certainly not nearly as good or useful as having the actual\r\n> user known to the database.\r\n\r\nCould you talk more about the use cases for which having the \"actual\r\nuser\" is better? 
From an auditing perspective I don't see why\r\n\"authenticated as jacob@example.net, logged in as admin\" is any worse\r\nthan \"logged in as jacob\".\r\n\r\n> > Forcing one role per individual end user is wasteful and isn't really\r\n> > making good use of the role-based system that you already have.\r\n> > Generally speaking, when administering hundreds or thousands of users,\r\n> > people start dividing them up into groups as opposed to dealing with\r\n> > them individually. So I don't think new features should be taking away\r\n> > flexibility in this area -- if one role per user already works well for\r\n> > you, great, but don't make everyone do the same.\r\n> \r\n> Using the role system we have to assign privileges certainly is useful\r\n> and sensible, of course, though I don't see where you've actually made\r\n> an argument for why one role per individual is somehow wasteful or\r\n> somehow takes away from the role system that we have for granting\r\n> rights. \r\n\r\nI was responding more to your statement that \"Being able to have roles\r\nand memberships automatically created is much more the direction that\r\nI'd say we should be going in\". It's not that one-role-per-user is\r\ninherently wasteful, but forcing role proliferation where it's not\r\nneeded is. If all users have the same set of permissions, there doesn't\r\nneed to be more than one role. But see below.\r\n\r\n> I'm also not suggesting that we make everyone do the same\r\n> thing, indeed, later on I was supportive of having an external system\r\n> provide the mapping. Here, I'm just making the point that we should\r\n> also be looking at automatic role/membership creation.\r\n\r\nGotcha. Agreed; that would open up the ability to administer role\r\nprivileges externally too, which would be cool. 
That could be used in\r\ntandem with something like this patchset.\r\n\r\n> > > I'd go a step further and suggest that the way to do this is with a\r\n> > > background worker that's started up and connects to an LDAP\r\n> > > infrastructure and listens for changes, allowing the system to pick up\r\n> > > on new roles/memberships as soon as they're created in the LDAP\r\n> > > environment. That would then be controlled by appropriate settings in\r\n> > > postgresql.conf/.auto.conf.\r\n> > \r\n> > This is roughly what you can already do with existing (third-party)\r\n> > tools, and that approach isn't scaling out in practice for some of our\r\n> > existing customers. The load on the central server, for thousands of\r\n> > idle databases dialing in just to see if there are any new users, is\r\n> > huge.\r\n> \r\n> If you're referring specifically to cron-based tools which are\r\n> constantly hammering on the LDAP servers running the same queries over\r\n> and over, sure, I agree that that's creating load on the LDAP\r\n> infrastructure (though, well, it was kind of designed to be very\r\n> scalable for exactly that kind of load, no? So I'm not really sure why\r\n> that's such an issue..).\r\n\r\nI don't have hands-on experience here -- just going on what I've been\r\ntold via field/product teams -- but it seems to me that there's a big\r\ndifference between asking an LDAP server to give you information on a\r\nuser at the time that user logs in, and asking it to give a list of\r\n_all_ users to every single Postgres instance you have on a regular\r\ntimer. The latter is what seems to be problematic.\r\n\r\n> That's also why I specifically wasn't\r\n> suggesting that and was instead suggesting that we have something that's\r\n> connected to one of the (hopefully, many, many) LDAP servers and is\r\n> doing change monitoring, allowing changes to be pushed down to PG,\r\n> rather than cronjobs constantly running the same queries and re-checking\r\n> things over and over. 
I appreciate that that's also not free, but I\r\n> don't believe it's nearly as bad as the cron-based approach and it's\r\n> certainly something that an LDAP infrastructure should be really rather\r\n> good at.\r\n\r\nI guess I'd have to see an implementation -- I was under the impression\r\nthat persistent search wasn't widely implemented?\r\n\r\n> > > All that said, I do see how having the ability to call out to another\r\n> > > system for mappings may be useful, so I'm not sure that we shouldn't\r\n> > > consider this specific change and have it be specifically just for\r\n> > > mappings, in which case pg_ident seems appropriate.\r\n> > \r\n> > Yeah, this PoC was mostly an increment on the functionality that\r\n> > already existed. The division between what goes in pg_hba and what goes\r\n> > in pg_ident is starting to blur with this patchset, though, and I think\r\n> > Peter's point is sound.\r\n> \r\n> This part I tend to disagree with- pg_ident for mappings and for ways to\r\n> call out to other systems to provide those mappings strikes me as\r\n> entirely appropriate and doesn't blur the lines and that's really what\r\n> this patch seems to be primarily about. Peter noted that there might be\r\n> other things we want to do and argued that those might not be\r\n> appropriate in pg_ident, which I tend to agree with, but I don't think\r\n> we need to invent something entirely new for mappings when we have\r\n> pg_ident already.\r\n\r\nThe current patchset here has pieces of what is usually contained in\r\nHBA (the LDAP host/port/base/filter/etc.) effectively moved into\r\npg_ident, while other pieces (TLS settings) remain in the HBA and the\r\nenvironment. That's what I'm referring to. 
If that is workable for you\r\nin the end, that's fine, but for me it'd be much easier to maintain if\r\nthe mapping query and the LDAP connection settings for that mapping\r\nquery were next to each other.\r\n\r\n> When it comes to the question of \"how to connect to an LDAP server for\r\n> $whatever\", it seems like it'd be nice to be able to configure that once\r\n> and reuse that configuration. Not sure I have a great suggestion for\r\n> how to do that. The approach this patch takes of adding options to\r\n> pg_hba for that, just like other options in pg_hba do, strikes me as\r\n> pretty reasonable.\r\n\r\nRight. That part seems less reasonable to me, given the current format\r\nof the HBA. YMMV.\r\n\r\n> I would advocate for other methods to work when it comes to\r\n> authenticating to LDAP from PG though (such as GSSAPI, in particular,\r\n> of course...).\r\n\r\nI can take a look at the Cyrus requirements for the GSSAPI mechanism.\r\nMight be tricky to add tests for it, though. Any others you're\r\ninterested in?\r\n\r\n> > > I certainly don't think we should have this be limited to LDAP auth-\r\n> > > such an external mapping ability is suitable for any authentication\r\n> > > method that supports a mapping (thinking specifically of GSSAPI, of\r\n> > > course..). Not sure if that's what was meant above but did want to\r\n> > > make sure that was clear.\r\n> > \r\n> > You can't use usermaps with LDAP auth yet, so no, that's not what I\r\n> > meant. (I have another patch for that feature in commitfest, which\r\n> > would allow these two things to be used together.)\r\n> \r\n> Yes, I'm aware of the other patch, just wanted to make sure the intent\r\n> is for this to work for all map-supporting auth methods. Figured that\r\n> was the case but the examples in the prior email had me concerned and\r\n> just wanted to make sure.\r\n\r\nCorrect. 
The new tests use cert auth, for instance.\r\n\r\nThanks,\r\n--Jacob\r\n", "msg_date": "Tue, 4 Jan 2022 23:56:02 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PoC] Delegating pg_ident to a third party" }, { "msg_contents": "Greetings,\n\nOn Tue, Jan 4, 2022 at 18:56 Jacob Champion <pchampion@vmware.com> wrote:\n\n> On Mon, 2022-01-03 at 19:42 -0500, Stephen Frost wrote:\n> > * Jacob Champion (pchampion@vmware.com) wrote:\n> > >\n> > > That last point was my motivation for the authn_id patch [1] -- so that\n> > > auditing could see the actual user _and_ the generic role. The\n> > > information is already there to be used, it's just not exposed to the\n> > > stats framework yet.\n> >\n> > While that helps, and I generally support adding that information to the\n> > logs, it's certainly not nearly as good or useful as having the actual\n> > user known to the database.\n>\n> Could you talk more about the use cases for which having the \"actual\n> user\" is better? From an auditing perspective I don't see why\n> \"authenticated as jacob@example.net, logged in as admin\" is any worse\n> than \"logged in as jacob\".\n\n\nThe above case isn’t what we are talking about, as far as I understand\nanyway. You’re suggesting “authenticated as jacob@example.net, logged in as\nsales” where the user in the database is “sales”. Consider triggers which\nonly have access to “sales”, or a tool like pgaudit which only has access\nto “sales”. Who was it in sales that updated that record though? 
We don’t\nknow- we would have to go try to figure it out from the logs, but even if\nwe had time stamps on the row update, there could be 50 sales people logged\nin at overlapping times.\n\n> > Forcing one role per individual end user is wasteful and isn't really\n> > > making good use of the role-based system that you already have.\n> > > Generally speaking, when administering hundreds or thousands of users,\n> > > people start dividing them up into groups as opposed to dealing with\n> > > them individually. So I don't think new features should be taking away\n> > > flexibility in this area -- if one role per user already works well for\n> > > you, great, but don't make everyone do the same.\n> >\n> > Using the role system we have to assign privileges certainly is useful\n> > and sensible, of course, though I don't see where you've actually made\n> > an argument for why one role per individual is somehow wasteful or\n> > somehow takes away from the role system that we have for granting\n> > rights.\n>\n> I was responding more to your statement that \"Being able to have roles\n> and memberships automatically created is much more the direction that\n> I'd say we should be going in\". It's not that one-role-per-user is\n> inherently wasteful, but forcing role proliferation where it's not\n> needed is. If all users have the same set of permissions, there doesn't\n> need to be more than one role. But see below.\n\n\nJust saying it’s wasteful isn’t actually saying what is wasteful about it.\n\n> I'm also not suggesting that we make everyone do the same\n> > thing, indeed, later on I was supportive of having an external system\n> > provide the mapping. Here, I'm just making the point that we should\n> > also be looking at automatic role/membership creation.\n>\n> Gotcha. Agreed; that would open up the ability to administer role\n> privileges externally too, which would be cool. 
That could be used in\n> tandem with something like this patchset.\n\n\nNot sure exactly what you’re referring to here by “administer role\nprivileges externally too”..? Curious to hear what you are imagining\nspecifically.\n\n> > > I'd go a step further and suggest that the way to do this is with a\n> > > > background worker that's started up and connects to an LDAP\n> > > > infrastructure and listens for changes, allowing the system to pick\n> up\n> > > > on new roles/memberships as soon as they're created in the LDAP\n> > > > environment. That would then be controlled by appropriate settings\n> in\n> > > > postgresql.conf/.auto.conf.\n> > >\n> > > This is roughly what you can already do with existing (third-party)\n> > > tools, and that approach isn't scaling out in practice for some of our\n> > > existing customers. The load on the central server, for thousands of\n> > > idle databases dialing in just to see if there are any new users, is\n> > > huge.\n> >\n> > If you're referring specifically to cron-based tools which are\n> > constantly hammering on the LDAP servers running the same queries over\n> > and over, sure, I agree that that's creating load on the LDAP\n> > infrastructure (though, well, it was kind of designed to be very\n> > scalable for exactly that kind of load, no? So I'm not really sure why\n> > that's such an issue..).\n>\n> I don't have hands-on experience here -- just going on what I've been\n> told via field/product teams -- but it seems to me that there's a big\n> difference between asking an LDAP server to give you information on a\n> user at the time that user logs in, and asking it to give a list of\n> _all_ users to every single Postgres instance you have on a regular\n> timer. 
The latter is what seems to be problematic.\n\n\nAnd to be clear, I agree that’s not good (though, again, really, your ldap\ninfrastructure shouldn’t be having all that much trouble with it- you can\nscale those out verryyyy far, and far more easily than a relational\ndatabase..).\n\nI’d also point out though that having to do an ldap lookup on every login\nto PG is *already* an issue in some environments, having to do multiple\namplifies that. Not to mention that when the ldap servers can’t be reached\nfor some reason, no one can log into the database and that’s rather\nunfortunate too. These are, of course, arguments for moving away from\nmethods that require checking with some other system synchronously during\nlogin- which is another reason why it’s better to have the authentication\ncredentials easily map to the PG role, without the need for external checks\nat login time. That’s done with today’s pg_ident, but this patch would\nchange that.\n\nConsider the approach I continue to advocate- GSSAPI based authentication,\nwhere a user only needs to contact the Kerberos server perhaps every 8\nhours or so for an updated ticket but otherwise can authorize directly to\nPG using their existing ticket and credentials, where their role was\npreviously created and their memberships already exist thanks to a\nbackground worker whose job it is to handle that and which deals with\ntransient network failures or other issues. 
In this world, most logins to\nPG don’t require any other system to be involved besides the client, the PG\nserver, and the networking between them; perhaps DNS if things aren’t\ncached on the client.\n\nOn the other hand, to use ldap authentication (which also happens to be\ndemonstrably insecure without any reasonable way to fix that), with an ldap\nmapping setup, requires two logins to an ldap server every single time a\nuser logs into PG and if the ldap environment is offline or overloaded for\nwhatever reason, the login fails or takes an excessively long amount of\ntime.\n\n> > That's also why I specifically wasn't\n> > suggesting that and was instead suggesting that we have something that's\n> > connected to one of the (hopefully, many, many) LDAP servers and is\n> > doing change monitoring, allowing changes to be pushed down to PG,\n> > rather than cronjobs constantly running the same queries and re-checking\n> > things over and over. I appreciate that that's also not free, but I\n> > don't believe it's nearly as bad as the cron-based approach and it's\n> > certainly something that an LDAP infrastructure should be really rather\n> > good at.\n>\n> I guess I'd have to see an implementation -- I was under the impression\n> that persistent search wasn't widely implemented?\n\n\nI mean … let’s talk about the one that really matters here:\n\nhttps://docs.microsoft.com/en-us/windows/win32/ad/change-notifications-in-active-directory-domain-services\n\nOpenLDAP has an audit log system which can be used though it’s certainly\nnot as nice and would require code specific to it.\n\nThis talks a bit about other directories:\nhttps://docs.informatica.com/data-integration/powerexchange-adapters-for-powercenter/10-1/powerexchange-for-ldap-user-guide-for-powercenter/ldap-sessions/configuring-change-data-capture/methods-for-tracking-changes-in-different-directories.html\n\nI do wish they all supported it cleanly in the same way.\n\n> > > > All that said, I do see how having the 
ability to call out to another\n> > > > system for mappings may be useful, so I'm not sure that we shouldn't\n> > > > consider this specific change and have it be specifically just for\n> > > > mappings, in which case pg_ident seems appropriate.\n> > >\n> > > Yeah, this PoC was mostly an increment on the functionality that\n> > > already existed. The division between what goes in pg_hba and what goes\n> > > in pg_ident is starting to blur with this patchset, though, and I think\n> > > Peter's point is sound.\n> >\n> > This part I tend to disagree with- pg_ident for mappings and for ways to\n> > call out to other systems to provide those mappings strikes me as\n> > entirely appropriate and doesn't blur the lines and that's really what\n> > this patch seems to be primarily about. Peter noted that there might be\n> > other things we want to do and argued that those might not be\n> > appropriate in pg_ident, which I tend to agree with, but I don't think\n> > we need to invent something entirely new for mappings when we have\n> > pg_ident already.\n>\n> The current patchset here has pieces of what is usually contained in\n> HBA (the LDAP host/port/base/filter/etc.) effectively moved into\n> pg_ident, while other pieces (TLS settings) remain in the HBA and the\n> environment. That's what I'm referring to. If that is workable for you\n> in the end, that's fine, but for me it'd be much easier to maintain if\n> the mapping query and the LDAP connection settings for that mapping\n> query were next to each other.\n\n\nI can agree with the point that it would be nicer to have the ldap\nhost/port/base/filter be in the hba instead, if there is a way to\naccomplish that reasonably. Did you have a suggestion in mind for how to do\nthat..? 
If there’s an alternative approach to consider, it’d be useful to\nsee them next to each other and then we could all contemplate which is\nbetter.\n\n> > When it comes to the question of \"how to connect to an LDAP server for\n> > $whatever\", it seems like it'd be nice to be able to configure that once\n> > and reuse that configuration. Not sure I have a great suggestion for\n> > how to do that. The approach this patch takes of adding options to\n> > pg_hba for that, just like other options in pg_hba do, strikes me as\n> > pretty reasonable.\n>\n> Right. That part seems less reasonable to me, given the current format\n> of the HBA. YMMV.\n\n\nIf the ldap connection info and filters and such could all exist in the\nhba, then perhaps a way to define those credentials in one place in the hba\nfile and then use them on other lines would be possible..? Seems like that\nwould be easier than having them also in the ident or having the ident\nrefer to something defined elsewhere.\n\nConsider in the hba having:\n\nLDAPSERVER[ldap1]=“ldaps://whatever other options go here”\n\nThen later:\n\nhostssl all all ::0/0 ldap ldapserver=ldap1 ldapmapserver=ldap1\nmap=myldapmap\n\nClearly needs more thought due to different requirements for ldap\nauthentication vs. the map, but still, the general idea being to have all\nof it in the hba and then a way to define ldap server configuration in the\nhba once and then reused.\n\n> > I would advocate for other methods to work when it comes to\n> > authenticating to LDAP from PG though (such as GSSAPI, in particular,\n> > of course...).\n>\n> I can take a look at the Cyrus requirements for the GSSAPI mechanism.\n> Might be tricky to add tests for it, though. Any others you're\n> interested in?\n\n\nGSSAPI is the main one … I suppose client side certificates would be nice\ntoo if that’s possible. 
I suspect some would like a way to have\nusername/pw ldap credentials in some other file besides the hba, but that\nisn’t as interesting to me, at least.\n\n> > > I certainly don't think we should have this be limited to LDAP auth-\n> > > > such an external mapping ability is suitable for any authentication\n> > > > method that supports a mapping (thinking specifically of GSSAPI, of\n> > > > course..). Not sure if that's what was meant above but did want to\n> > > > make sure that was clear.\n> > >\n> > > You can't use usermaps with LDAP auth yet, so no, that's not what I\n> > > meant. (I have another patch for that feature in commitfest, which\n> > > would allow these two things to be used together.)\n> >\n> > Yes, I'm aware of the other patch, just wanted to make sure the intent\n> > is for this to work for all map-supporting auth methods. Figured that\n> > was the case but the examples in the prior email had me concerned and\n> > just wanted to make sure.\n>\n> Correct. The new tests use cert auth, for instance.\n\n\nGreat.\n\nThanks!\n\nStephen", "msg_date": "Tue, 4 Jan 2022 22:24:58 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: [PoC] Delegating pg_ident to a third party" }, { "msg_contents": "On Tue, 2022-01-04 at 22:24 -0500, Stephen Frost wrote:\r\n> On Tue, Jan 4, 2022 at 18:56 Jacob Champion <pchampion@vmware.com> wrote:\r\n> > \r\n> > Could you talk more about the use cases for which having the \"actual\r\n> > user\" is better? From an auditing perspective I don't see why\r\n> > \"authenticated as jacob@example.net, logged in as admin\" is any worse\r\n> > than \"logged in as jacob\".\r\n> \r\n> The above case isn’t what we are talking about, as far as I\r\n> understand anyway. You’re suggesting “authenticated as \r\n> jacob@example.net, logged in as sales” where the user in the database\r\n> is “sales”. Consider triggers which only have access to “sales”, or\r\n> a tool like pgaudit which only has access to “sales”.\r\n\r\nOkay. So an additional getter function in miscadmin.h, and surfacing\r\nthat function to trigger languages, are needed to make authn_id more\r\ngenerally useful. Any other cases you can think of?\r\n\r\n> > I was responding more to your statement that \"Being able to have roles\r\n> > and memberships automatically created is much more the direction that\r\n> > I'd say we should be going in\".
It's not that one-role-per-user is\r\n> > inherently wasteful, but forcing role proliferation where it's not\r\n> > needed is. If all users have the same set of permissions, there doesn't\r\n> > need to be more than one role. But see below.\r\n> \r\n> Just saying it’s wasteful isn’t actually saying what is wasteful about it.\r\n\r\nWell, I felt like it was irrelevant; you've already said you have no\r\nintention to force one-user-per-role.\r\n\r\nBut to elaborate: *forcing* one-user-per-role is wasteful, because if I\r\nhave a thousand employees, and I want to give all my employees access\r\nto a guest role in the database, then I have to administer a thousand\r\nroles: maintaining them through dump/restores and pg_upgrades, auditing\r\nthem to figure out why Bob in Accounting somehow got a different\r\nprivilege GRANT than the rest of the users, adding new accounts,\r\npurging old ones, maintaining the inevitable scripts that will result.\r\n\r\nIf none of the users need to be \"special\" in any way, that's all wasted\r\noverhead. (If they do actually need to be special, then at least some\r\nof that overhead becomes necessary. Otherwise it's waste.) You may be\r\nable to mitigate the cost of the waste, or absorb the mitigations into\r\nPostgres so that the user can't see the waste, or decide that the waste\r\nis not costly enough to care about. It's still waste.\r\n\r\n> > > I'm also not suggesting that we make everyone do the same\r\n> > > thing, indeed, later on I was supportive of having an external system\r\n> > > provide the mapping. Here, I'm just making the point that we should\r\n> > > also be looking at automatic role/membership creation.\r\n> > \r\n> > Gotcha. Agreed; that would open up the ability to administer role\r\n> > privileges externally too, which would be cool. That could be used in\r\n> > tandem with something like this patchset.\r\n> \r\n> Not sure exactly what you’re referring to here by “administer role\r\n> privileges externally too”..? 
Curious to hear what you are imagining\r\n> specifically.\r\n\r\nJust that it would be nice to centrally provision role GRANTs as well\r\nas role membership, that's all. No specifics in mind, and I'm not even\r\nsure if LDAP would be a helpful place to put that sort of config.\r\n\r\n> I’d also point out though that having to do an ldap lookup on every\r\n> login to PG is *already* an issue in some environments, having to do\r\n> multiple amplifies that.\r\n\r\nYou can't use the LDAP auth method with this patch yet, so this concern\r\nis based on code that doesn't exist. It's entirely possible that you\r\ncould do the role query as part of the first bound connection. If that\r\nproves unworkable, then yes, I agree that it's a concern.\r\n\r\n> Not to mention that when the ldap servers can’t be reached for some\r\n> reason, no one can log into the database and that’s rather\r\n> unfortunate too.\r\n\r\nAssuming you have no caches, then yes. That might be a pretty good\r\nargument for allowing ldapmap and map to be used together, actually, so\r\nthat you can have some critical users who can always log in as\r\n\"themselves\" or \"admin\" or etc. Or maybe it's an argument for allowing\r\nHBA to handle fallback methods of authentication.\r\n\r\nLuckily I think it's pretty easy to communicate to LDAP users that if\r\n*all* your login infrastructure goes down, you will no longer be able\r\nto log in. They're probably used to that idea, if they haven't set up\r\nany availability infra.\r\n\r\n> These are, of course, arguments for moving away from methods that\r\n> require checking with some other system synchronously during login-\r\n> which is another reason why it’s better to have the authentication\r\n> credentials easily map to the PG role, without the need for external\r\n> checks at login time. 
That’s done with today’s pg_ident, but this\r\n> patch would change that.\r\n\r\nThere are arguments for moving towards synchronous checks as well.\r\nCentral revocation of credentials (in timeframes shorter than ticket\r\nexpiration) is what comes to mind. Revocation is hard and usually\r\nconflicts with the desire for availability.\r\n\r\nWhat's \"better\" for me or you is not necessarily \"better\" overall; it's\r\nall tradeoffs, all the time.\r\n\r\n> Consider the approach I continue to advocate- GSSAPI based\r\n> authentication, where a user only needs to contact the Kerberos\r\n> server perhaps every 8 hours or so for an updated ticket but\r\n> otherwise can authorize directly to PG using their existing ticket\r\n> and credentials, where their role was previously created and their\r\n> memberships already exist thanks to a background worker whose job it\r\n> is to handle that and which deals with transient network failures or\r\n> other issues. In this world, most logins to PG don’t require any\r\n> other system to be involved besides the client, the PG server, and\r\n> the networking between them; perhaps DNS if things aren’t cached on \r\n> the client.\r\n> \r\n> On the other hand, to use ldap authentication (which also happens to\r\n> be demonstrable insecure without any reasonable way to fix that),\r\n> with an ldap mapping setup, requires two logins to an ldap server\r\n> every single time a user logs into PG and if the ldap environment is\r\n> offline or overloaded for whatever reason, the login fails or takes\r\n> an excessively long amount of time.\r\n\r\nThe two systems have different architectures, and different security\r\nproperties, and you have me at a disadvantage in that you can see the\r\nexperimental code I have written and I cannot see the hypothetical code\r\nin your head.\r\n\r\nIt sounds like I'm more concerned with the ability to have an online\r\ncentral source of truth for access control, accepting that denial of\r\nservice may cause 
the system to fail shut; and you're more concerned\r\nwith availability in the face of network failure, accepting that denial\r\nof service may cause the system to fail open. I think that's a design\r\ndecision that belongs to an end user.\r\n\r\nThe distributed availability problems you're describing are, in my\r\nexperience, typically solved by caching. With your not-yet-written\r\nsolution, the caching is built into Postgres, and it's on all of the\r\ntime, but may (see below) only actually perform well with Active\r\nDirectory. With my solution, any caching is optional, because it has to\r\nbe implemented/maintained external to Postgres, but because it's just\r\ngeneric \"LDAP caching\" then it should be broadly compatible and we\r\ndon't have to maintain it. I can see arguments for and against both\r\napproaches.\r\n\r\n> > I guess I'd have to see an implementation -- I was under the impression\r\n> > that persistent search wasn't widely implemented?\r\n> \r\n> I mean … let’s talk about the one that really matters here: \r\n> \r\n> https://docs.microsoft.com/en-us/windows/win32/ad/change-notifications-in-active-directory-domain-services\r\n\r\nThat would certainly be a useful thing to implement for deployments\r\nthat can use it. But my personal interest in writing \"LDAP\" code that\r\nonly works with AD is nil, at least in the short term.\r\n\r\n(The continued attitude that Microsoft Active Directory is \"the one\r\nthat really matters\" is really frustrating. I have users on LDAP\r\nwithout Active Directory. 
Postgres tests are written against OpenLDAP.)\r\n\r\n> OpenLDAP has an audit log system which can be used though it’s\r\n> certainly not as nice and would require code specific to it.\r\n> \r\n> This talks a bit about other directories: \r\n> https://docs.informatica.com/data-integration/powerexchange-adapters-for-powercenter/10-1/powerexchange-for-ldap-user-guide-for-powercenter/ldap-sessions/configuring-change-data-capture/methods-for-tracking-changes-in-different-directories.html\r\n> \r\n> I do wish they all supported it cleanly in the same way.\r\n\r\nOkay. But the answer to \"is persistent search widely implemented?\"\r\nappears to be \"No.\"\r\n\r\n> > The current patchset here has pieces of what is usually contained in\r\n> > HBA (the LDAP host/port/base/filter/etc.) effectively moved into\r\n> > pg_ident, while other pieces (TLS settings) remain in the HBA and the\r\n> > environment. That's what I'm referring to. If that is workable for you\r\n> > in the end, that's fine, but for me it'd be much easier to maintain if\r\n> > the mapping query and the LDAP connection settings for that mapping\r\n> > query were next to each other.\r\n> \r\n> I can agree with the point that it would be nicer to have the ldap\r\n> host/port/base/filter be in the hba instead, if there is a way to\r\n> accomplish that reasonably. Did you have a suggestion in mind for how\r\n> to do that..? If there’s an alternative approach to consider, it’d\r\n> be useful to see them next to each other and then we could all\r\n> contemplate which is better.\r\n\r\nI didn't say I necessarily wanted it all in the HBA, just that I wanted\r\nit all in the same spot.\r\n\r\nI don't see a good way to push the filter back into the HBA, because it\r\nmay very well depend on the users being mapped (i.e. there may need to\r\nbe multiple lines in the map). Same for the query attributes. 
In fact\r\nif I'm already using AD Kerberos or SSPI and I want to be able to\r\nhandle users coming from multiple domains, couldn't I be querying\r\nentirely different servers depending on the username presented?\r\n\r\n> > > When it comes to the question of \"how to connect to an LDAP server for\r\n> > > $whatever\", it seems like it'd be nice to be able to configure that once\r\n> > > and reuse that configuration. Not sure I have a great suggestion for\r\n> > > how to do that. The approach this patch takes of adding options to\r\n> > > pg_hba for that, just like other options in pg_hba do, strikes me as\r\n> > > pretty reasonable.\r\n> > \r\n> > Right. That part seems less reasonable to me, given the current format\r\n> > of the HBA. YMMV.\r\n> \r\n> If the ldap connection info and filters and such could all exist in\r\n> the hba, then perhaps a way to define those credentials in one place\r\n> in the hba file and then use them on other lines would be\r\n> possible..? Seems like that would be easier than having them also in\r\n> the ident or having the ident refer to something defined elsewhere. \r\n> \r\n> Consider in the hba having:\r\n> \r\n> LDAPSERVER[ldap1]=“ldaps://whatever other options go here”\r\n> \r\n> Then later:\r\n> \r\n> hostssl all all ::0/0 ldap ldapserver=ldap1 ldapmapserver=ldap1 map=myldapmap\r\n> \r\n> Clearly needs more thought needed due to different requirements for\r\n> ldap authentication vs. the map, but still, the general idea being to\r\n> have all of it in the hba and then a way to define ldap server\r\n> configuration in the hba once and then reused.\r\n\r\nYou're open to the idea of bolting a new key/value grammar onto the HBA\r\nparser, but not to the idea of brainstorming a different configuration\r\nDSL?\r\n\r\n> > I can take a look at the Cyrus requirements for the GSSAPI mechanism.\r\n> > Might be tricky to add tests for it, though. 
Any others you're\r\n> > interested in?\r\n> \r\n> GSSAPI is the main one … I suppose client side certificates would be\r\n> nice too if that’s possible. I suspect some would like a way to have\r\n> username/pw ldap credentials in some other file besides the hba, but\r\n> that isn’t as interesting to me, at least.\r\n\r\nCertificate auth is already there in the patch. See the end of\r\nt/001_ldap.t.\r\n\r\nThanks,\r\n--Jacob\r\n\r\n\r\n", "msg_date": "Sat, 8 Jan 2022 00:32:58 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PoC] Delegating pg_ident to a third party" }, { "msg_contents": "Greetings,\n\n* Jacob Champion (pchampion@vmware.com) wrote:\n> On Tue, 2022-01-04 at 22:24 -0500, Stephen Frost wrote:\n> > On Tue, Jan 4, 2022 at 18:56 Jacob Champion <pchampion@vmware.com> wrote:\n> > > \n> > > Could you talk more about the use cases for which having the \"actual\n> > > user\" is better? From an auditing perspective I don't see why\n> > > \"authenticated as jacob@example.net, logged in as admin\" is any worse\n> > > than \"logged in as jacob\".\n> > \n> > The above case isn’t what we are talking about, as far as I\n> > understand anyway. You’re suggesting “authenticated as \n> > jacob@example.net, logged in as sales” where the user in the database\n> > is “sales”. Consider triggers which only have access to “sales”, or\n> > a tool like pgaudit which only has access to “sales”.\n> \n> Okay. So an additional getter function in miscadmin.h, and surfacing\n> that function to trigger languages, are needed to make authn_id more\n> generally useful. Any other cases you can think of?\n\nThat would help but now you've got two different things that have to be\ntracked, potentially, because for some people you might not want to use\ntheir system auth'd-as ID. I don't see that as a great solution and\ninstead as a workaround. 
Yes, we should also do this but it's really an\nargument for how to deal with such a setup, not a justification for\ngoing down this route.\n\n> > > I was responding more to your statement that \"Being able to have roles\n> > > and memberships automatically created is much more the direction that\n> > > I'd say we should be going in\". It's not that one-role-per-user is\n> > > inherently wasteful, but forcing role proliferation where it's not\n> > > needed is. If all users have the same set of permissions, there doesn't\n> > > need to be more than one role. But see below.\n> > \n> > Just saying it’s wasteful isn’t actually saying what is wasteful about it.\n> \n> Well, I felt like it was irrelevant; you've already said you have no\n> intention to force one-user-per-role.\n\nForcing one-user-per-role would be breaking things we already support\nso, no, I certainly don't have any intention of requiring such a change.\nThat said, I do feel it's useful to have these discussions.\n\n> But to elaborate: *forcing* one-user-per-role is wasteful, because if I\n> have a thousand employees, and I want to give all my employees access\n> to a guest role in the database, then I have to administer a thousand\n> roles: maintaining them through dump/restores and pg_upgrades, auditing\n> them to figure out why Bob in Accounting somehow got a different\n> privilege GRANT than the rest of the users, adding new accounts,\n> purging old ones, maintaining the inevitable scripts that will result.\n\npg_upgrade just handles it, no? pg_dumpall -g does too. Having to deal\nwith roles in general is a pain but the number of them isn't necessarily\nan issue. A guest role which doesn't have any auditing requirements\nmight be a decent use-case for what you're talking about here but I\ndon't know that we'd implement this for just that case. 
Part of this\ndiscussion was specifically about addressing the other challenges- like\nhaving automation around the account addition/removal and sync'ing role\nmembership too. As for auditing privileges, that should be done\nregardless and the case you outline isn't somehow different from others\n(the same could be as easily said for how the 'guest' account got access\nto whatever it did).\n\n> If none of the users need to be \"special\" in any way, that's all wasted\n> overhead. (If they do actually need to be special, then at least some\n> of that overhead becomes necessary. Otherwise it's waste.) You may be\n> able to mitigate the cost of the waste, or absorb the mitigations into\n> Postgres so that the user can't see the waste, or decide that the waste\n> is not costly enough to care about. It's still waste.\n\nExcept the amount of 'wasted' overhead being claimed here seems to be\nhardly any. The biggest complaint levied at this seems to really be\njust the issues around the load on the ldap systems from having to deal\nwith the frequent sync queries, and that's largely a solvable issue in\nthe majority of environments out there today.\n\n> > > > I'm also not suggesting that we make everyone do the same\n> > > > thing, indeed, later on I was supportive of having an external system\n> > > > provide the mapping. Here, I'm just making the point that we should\n> > > > also be looking at automatic role/membership creation.\n> > > \n> > > Gotcha. Agreed; that would open up the ability to administer role\n> > > privileges externally too, which would be cool. That could be used in\n> > > tandem with something like this patchset.\n> > \n> > Not sure exactly what you’re referring to here by “administer role\n> > privileges externally too”..? Curious to hear what you are imagining\n> > specifically.\n> \n> Just that it would be nice to centrally provision role GRANTs as well\n> as role membership, that's all. 
No specifics in mind, and I'm not even\n> sure if LDAP would be a helpful place to put that sort of config.\n\nGRANT's on objects, you mean? I agree, that would be interesting to\nconsider though it would involve custom entries in the LDAP directory,\nno? Role membership would be able to be sync'd as part of group\nmembership and that was something I was thinking would be handled as\npart of this in a similar manner to what the 3rd party solutions provide\ntoday using the cron-based approach.\n\n> > I’d also point out though that having to do an ldap lookup on every\n> > login to PG is *already* an issue in some environments, having to do\n> > multiple amplifies that.\n> \n> You can't use the LDAP auth method with this patch yet, so this concern\n> is based on code that doesn't exist. It's entirely possible that you\n> could do the role query as part of the first bound connection. If that\n> proves unworkable, then yes, I agree that it's a concern.\n\nPerhaps it could be done as part of the same connection but that then\nhas an impact on what the configuration of the ident LDAP lookup would\nbe, no? That seems like an important thing to flesh out before we move\ntoo much farther with this patch, to make sure that, if we want that to\nwork, that there's a clear way to configure it to avoid the double LDAP\nconnection. I'm guessing you already have an idea how that'll work\nthough..?\n\n> > Not to mention that when the ldap servers can’t be reached for some\n> > reason, no one can log into the database and that’s rather\n> > unfortunate too.\n> \n> Assuming you have no caches, then yes. That might be a pretty good\n> argument for allowing ldapmap and map to be used together, actually, so\n> that you can have some critical users who can always log in as\n> \"themselves\" or \"admin\" or etc. Or maybe it's an argument for allowing\n> HBA to handle fallback methods of authentication.\n\nOk, so now we're talking about a cache that needs to be implemented\nwhich will ... 
store the user's password for LDAP authentication? Or\nwhat the mapping is for various LDAP IDs to PG roles? And how will that\ncache be managed? Would it be handled by dump/restore? What about\npg_upgrade? How will entries in the cache be removed?\n\nAnd mainly- how is this different from just having all the roles in PG\nto begin with..?\n\n> Luckily I think it's pretty easy to communicate to LDAP users that if\n> *all* your login infrastructure goes down, you will no longer be able\n> to log in. They're probably used to that idea, if they haven't set up\n> any availability infra.\n\nExcept that most of the rest of the infrastructure may continue to work\njust fine except for logging in- which is something most folks only do\nonce a day. That is, why is the SQL Server system still happily\naccepting connections while the AD is being rebooted? Or why can I\nstill log into the company website even though AD is down, but I can't\nget into PG? Not everything in an environment is tied to LDAP being up\nand running all the time, so it's not nearly so cut and dry in many,\nmany cases.\n\n> > These are, of course, arguments for moving away from methods that\n> > require checking with some other system synchronously during login-\n> > which is another reason why it’s better to have the authentication\n> > credentials easily map to the PG role, without the need for external\n> > checks at login time. That’s done with today’s pg_ident, but this\n> > patch would change that.\n> \n> There are arguments for moving towards synchronous checks as well.\n> Central revocation of credentials (in timeframes shorter than ticket\n> expiration) is what comes to mind. Revocation is hard and usually\n> conflicts with the desire for availability.\n\nRevocation in less time than ticket lifetime and everything falling over\ndue to the AD being restarted are very different. 
The approaches being\ndiscussed are all much shorter than ticket lifetime and so that's hardly\nan appropriate comparison to be making. I didn't suggest that waiting\nfor ticket expiration would be appropriate when it comes to syncing\naccounts between AD and PG or that it would be appropriate for\nrevocation. Regarding the cache'ing proposed above- in such a case,\nclearly, revocation wouldn't be synchronous either. Certainly in the\ncases today where cronjobs are being used to perform the sync,\nrevocation also isn't synchronous (unless also using LDAP for\nauthentication, of course, though that wouldn't do anything for existing\nsessions, while removing role memberships does...).\n\n> What's \"better\" for me or you is not necessarily \"better\" overall; it's\n> all tradeoffs, all the time.\n\nSure.\n\n> > Consider the approach I continue to advocate- GSSAPI based\n> > authentication, where a user only needs to contact the Kerberos\n> > server perhaps every 8 hours or so for an updated ticket but\n> > otherwise can authorize directly to PG using their existing ticket\n> > and credentials, where their role was previously created and their\n> > memberships already exist thanks to a background worker whose job it\n> > is to handle that and which deals with transient network failures or\n> > other issues. 
In this world, most logins to PG don’t require any\n> > other system to be involved besides the client, the PG server, and\n> > the networking between them; perhaps DNS if things aren’t cached on \n> > the client.\n> > \n> > On the other hand, to use ldap authentication (which also happens to\n> > be demonstrably insecure without any reasonable way to fix that),\n> > with an ldap mapping setup, requires two logins to an ldap server\n> > every single time a user logs into PG and if the ldap environment is\n> > offline or overloaded for whatever reason, the login fails or takes\n> > an excessively long amount of time.\n> \n> The two systems have different architectures, and different security\n> properties, and you have me at a disadvantage in that you can see the\n> experimental code I have written and I cannot see the hypothetical code\n> in your head.\n\nI've barely glanced at the code you've written and it largely hasn't\nbeen driving my comments on this thread- merely the understanding of how\nit works. Further, you've stated that you're already familiar with\nsystems that sync between LDAP and PG and the vast majority of this\ndiscussion has been about that distinction- if we push the mappings into\nPG as roles, or if we execute a query out to LDAP on connection to check\nthe mapping. The above references to tickets and GSSAPI/Kerberos are\nall from existing code as well. 
The only reference to hypothetical code\nis the idea of a background or other worker that subscribes to changes\nin LDAP and implements those changes in PG instead of having something\ncron-based do it, but that doesn't really change anything about the\narchitectural question of if we cache (either with an explicit cache, as\nyou've opined us adding above, though for which there is no code today,\nor just by using PG's existing role/membership system) or call out to\nLDAP for every login.\n\n> It sounds like I'm more concerned with the ability to have an online\n> central source of truth for access control, accepting that denial of\n> service may cause the system to fail shut; and you're more concerned\n> with availability in the face of network failure, accepting that denial\n> of service may cause the system to fail open. I think that's a design\n> decision that belongs to an end user.\n\nThere is more to it than just failing shut/closed. Part of the argument\nbeing used to drive this change was that it would help to reduce the\nload on the LDAP servers because there wouldn't be a need to run large\nqueries on them frequently out of cron to keep PG's understanding of\nwhat the roles are and their mappings matching what's in LDAP. \n\n> The distributed availability problems you're describing are, in my\n> experience, typically solved by caching. With your not-yet-written\n> solution, the caching is built into Postgres, and it's on all of the\n> time, but may (see below) only actually perform well with Active\n> Directory. With my solution, any caching is optional, because it has to\n> be implemented/maintained external to Postgres, but because it's just\n> generic \"LDAP caching\" then it should be broadly compatible and we\n> don't have to maintain it. 
I can see arguments for and against both\n> approaches.\n\nI'm a bit confused by this- either you're referring to the cache\nbeing PG's existing system, which certainly has already been written,\nand has existed since it was committed and released as part of 8.1, and\nis, indeed, on all the time ... or you're talking about something else\nwhich hasn't been written and could therefore be anything, though I'm\ngenerally against the idea of having an independent cache for this, as\ndescribed above.\n\nAs for optional cacheing with some generic LDAP caching system, that\nstrikes me as clearly even worse than building something into PG for\nthis as it requires maintaining yet another system in order to have a\nreasonably well working system and that isn't good. While it's good\nthat we have pgbouncer, it'd certainly be better if we didn't need it\nand it's got a bunch of downsides to it. I strongly suspect the same\nwould be true of some external generic \"LDAP cacheing\" system as is\nreferred to above, though as there isn't anything to look at, I can't\nsay for sure.\n\nRegarding 'performing well', while lots of little queries may be better\nin some cases than less frequent larger queries, that's really going to\ndepend on the frequency of each and therefore really be rather dependent\non the environment and usage. In any case, however, being able to\nleverage change modifications instead of fully resyncing will definitely\nbe better. 
why wouldn't we simply use\nthat with the cron-based system today to offload those from the main\nLDAP systems?\n\n> > > I guess I'd have to see an implementation -- I was under the impression\n> > > that persistent search wasn't widely implemented?\n> > \n> > I mean … let’s talk about the one that really matters here: \n> > \n> > https://docs.microsoft.com/en-us/windows/win32/ad/change-notifications-in-active-directory-domain-services\n> \n> That would certainly be a useful thing to implement for deployments\n> that can use it. But my personal interest in writing \"LDAP\" code that\n> only works with AD is nil, at least in the short term.\n> \n> (The continued attitude that Microsoft Active Directory is \"the one\n> that really matters\" is really frustrating. I have users on LDAP\n> without Active Directory. Postgres tests are written against OpenLDAP.)\n\nWhat would you consider the important directories to worry about beyond\nAD? I don't consider the PG testing framework to be particularly\nindicative of what enterprises are actually running.\n\n> > OpenLDAP has an audit log system which can be used though it’s\n> > certainly not as nice and would require code specific to it.\n> > \n> > This talks a bit about other directories: \n> > https://docs.informatica.com/data-integration/powerexchange-adapters-for-powercenter/10-1/powerexchange-for-ldap-user-guide-for-powercenter/ldap-sessions/configuring-change-data-capture/methods-for-tracking-changes-in-different-directories.html\n> > \n> > I do wish they all supported it cleanly in the same way.\n> \n> Okay. But the answer to \"is persistent search widely implemented?\"\n> appears to be \"No.\"\n\nI'm curious as to how the large environments that you've worked with\nhave generally solved this issue. Is there a generic LDAP cacheing\nsystem that's been used? What?\n\n> > > The current patchset here has pieces of what is usually contained in\n> > > HBA (the LDAP host/port/base/filter/etc.) 
effectively moved into\n> > > pg_ident, while other pieces (TLS settings) remain in the HBA and the\n> > > environment. That's what I'm referring to. If that is workable for you\n> > > in the end, that's fine, but for me it'd be much easier to maintain if\n> > > the mapping query and the LDAP connection settings for that mapping\n> > > query were next to each other.\n> > \n> > I can agree with the point that it would be nicer to have the ldap\n> > host/port/base/filter be in the hba instead, if there is a way to\n> > accomplish that reasonably. Did you have a suggestion in mind for how\n> > to do that..? If there’s an alternative approach to consider, it’d\n> > be useful to see them next to each other and then we could all\n> > contemplate which is better.\n> \n> I didn't say I necessarily wanted it all in the HBA, just that I wanted\n> it all in the same spot.\n> \n> I don't see a good way to push the filter back into the HBA, because it\n> may very well depend on the users being mapped (i.e. there may need to\n> be multiple lines in the map). Same for the query attributes. In fact\n> if I'm already using AD Kerberos or SSPI and I want to be able to\n> handle users coming from multiple domains, couldn't I be querying\n> entirely different servers depending on the username presented?\n\nYeah, that's a good point and which argues for putting everything into\nthe ident. In such a situation as you describe above, we wouldn't\nactually have any LDAP configuration in the HBA and I'm entirely fine\nwith that- we'd just have it all in ident. I don't see how you'd make\nthat work with, as you suggest above, LDAP-based authentication and the\nidea of having only one connection be used for the LDAP-based auth and\nthe mapping lookup, but I'm also not generally worried about LDAP-based\nauth and would rather we rip it out entirely. 
:)\n\nAs such, I'd say that you've largely convinced me that we should just\nmove all of the LDAP configuration for the lookup into the ident and\ndiscourage people from using LDAP-based authentication and from putting\nLDAP configuration into the hba. I'm still a fan of the general idea of\nhaving a way to configure such ldap parameters in one place in whatever\nfile they go into and then re-using that multiple times on the general\nassumption that folks are likely to need to reference a particular LDAP\nconfiguration more than once, wherever it's configured.\n\n> > > > When it comes to the question of \"how to connect to an LDAP server for\n> > > > $whatever\", it seems like it'd be nice to be able to configure that once\n> > > > and reuse that configuration. Not sure I have a great suggestion for\n> > > > how to do that. The approach this patch takes of adding options to\n> > > > pg_hba for that, just like other options in pg_hba do, strikes me as\n> > > > pretty reasonable.\n> > > \n> > > Right. That part seems less reasonable to me, given the current format\n> > > of the HBA. YMMV.\n> > \n> > If the ldap connection info and filters and such could all exist in\n> > the hba, then perhaps a way to define those credentials in one place\n> > in the hba file and then use them on other lines would be\n> > possible..? Seems like that would be easier than having them also in\n> > the ident or having the ident refer to something defined elsewhere. \n> > \n> > Consider in the hba having:\n> > \n> > LDAPSERVER[ldap1]=“ldaps://whatever other options go here”\n> > \n> > Then later:\n> > \n> > hostssl all all ::0/0 ldap ldapserver=ldap1 ldapmapserver=ldap1 map=myldapmap\n> > \n> > Clearly needs more thought needed due to different requirements for\n> > ldap authentication vs. 
the map, but still, the general idea being to\n> > have all of it in the hba and then a way to define ldap server\n> > configuration in the hba once and then reused.\n> \n> You're open to the idea of bolting a new key/value grammar onto the HBA\n> parser, but not to the idea of brainstorming a different configuration\n> DSL?\n\nShort answer- yes (or, as mentioned just above, into the ident file vs.\nthe hba). I'd rather we build on the existing configuration systems\nthat we have rather than invent something new that will then have to\nwork with the others, as I don't see it as likely that we could just\nreplace the existing ones with something new and make everyone\nchange. Having yet another one strikes me as worse than making\nimprovements to the existing ones (be those 'bolted on' or otherwise).\n\nThanks,\n\nStephen", "msg_date": "Mon, 10 Jan 2022 15:09:32 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: [PoC] Delegating pg_ident to a third party" }, { "msg_contents": "On Mon, 2022-01-10 at 15:09 -0500, Stephen Frost wrote:\r\n> Greetings,\r\n\r\nSorry for the delay, the last few weeks have been insane.\r\n\r\n> * Jacob Champion (pchampion@vmware.com) wrote:\r\n> > On Tue, 2022-01-04 at 22:24 -0500, Stephen Frost wrote:\r\n> > > On Tue, Jan 4, 2022 at 18:56 Jacob Champion <pchampion@vmware.com> wrote:\r\n> > > > Could you talk more about the use cases for which having the \"actual\r\n> > > > user\" is better? From an auditing perspective I don't see why\r\n> > > > \"authenticated as jacob@example.net, logged in as admin\" is any worse\r\n> > > > than \"logged in as jacob\".\r\n> > > \r\n> > > The above case isn’t what we are talking about, as far as I\r\n> > > understand anyway. You’re suggesting “authenticated as \r\n> > > jacob@example.net, logged in as sales” where the user in the database\r\n> > > is “sales”. 
Consider triggers which only have access to “sales”, or\r\n> > > a tool like pgaudit which only has access to “sales”.\r\n> > \r\n> > Okay. So an additional getter function in miscadmin.h, and surfacing\r\n> > that function to trigger languages, are needed to make authn_id more\r\n> > generally useful. Any other cases you can think of?\r\n> \r\n> That would help but now you've got two different things that have to be\r\n> tracked, potentially, because for some people you might not want to use\r\n> their system auth'd-as ID. I don't see that as a great solution and\r\n> instead as a workaround.\r\n\r\nThere's nothing to be worked around. If you have a user mapping set up\r\nusing the features that exist today, and you want to audit who logged\r\nin at some point in the past, then you need to log both the\r\nauthenticated ID and the authorized role. There's no getting around\r\nthat. It's not enough to say \"just check the configuration\" because the\r\nconfig can change over time.\r\n\r\n> > But to elaborate: *forcing* one-user-per-role is wasteful, because if I\r\n> > have a thousand employees, and I want to give all my employees access\r\n> > to a guest role in the database, then I have to administer a thousand\r\n> > roles: maintaining them through dump/restores and pg_upgrades, auditing\r\n> > them to figure out why Bob in Accounting somehow got a different\r\n> > privilege GRANT than the rest of the users, adding new accounts,\r\n> > purging old ones, maintaining the inevitable scripts that will result.\r\n> \r\n> pg_upgrade just handles it, no? pg_dumpall -g does too. Having to deal\r\n> with roles in general is a pain but the number of them isn't necessarily\r\n> an issue. A guest role which doesn't have any auditing requirements\r\n> might be a decent use-case for what you're talking about here but I\r\n> don't know that we'd implement this for just that case. 
Part of this\r\n> discussion was specifically about addressing the other challenges- like\r\n> having automation around the account addition/removal and sync'ing role\r\n> membership too. As for auditing privileges, that should be done\r\n> regardless and the case you outline isn't somehow different from others\r\n> (the same could be as easily said for how the 'guest' account got access\r\n> to whatever it did).\r\n\r\nI think there's a difference between auditing a small fixed number of\r\nroles and auditing many thousands of them that change on a weekly or\r\ndaily basis. I'd rather maintain the former, given the choice. It's\r\nharder for things to slip through the cracks with fewer moving pieces.\r\n\r\n> > If none of the users need to be \"special\" in any way, that's all wasted\r\n> > overhead. (If they do actually need to be special, then at least some\r\n> > of that overhead becomes necessary. Otherwise it's waste.) You may be\r\n> > able to mitigate the cost of the waste, or absorb the mitigations into\r\n> > Postgres so that the user can't see the waste, or decide that the waste\r\n> > is not costly enough to care about. It's still waste.\r\n> \r\n> Except the amount of 'wasted' overhead being claimed here seems to be\r\n> hardly any. The biggest complaint levied at this seems to really be\r\n> just the issues around the load on the ldap systems from having to deal\r\n> with the frequent sync queries, and that's largely a solvable issue in\r\n> the majority of environments out there today.\r\n\r\nAs long as we're in agreement that there is waste, I don't think I'm\r\ngoing to convince you about the cost. It's tangential anyway if you're\r\nnot going to remove many-to-many maps.\r\n\r\n> > > Not sure exactly what you’re referring to here by “administer role\r\n> > > privileges externally too”..? 
Curious to hear what you are imagining\r\n> > > specifically.\r\n> > \r\n> > Just that it would be nice to centrally provision role GRANTs as well\r\n> > as role membership, that's all. No specifics in mind, and I'm not even\r\n> > sure if LDAP would be a helpful place to put that sort of config.\r\n> \r\n> GRANT's on objects, you mean? I agree, that would be interesting to\r\n> consider though it would involve custom entries in the LDAP directory,\r\n> no? Role membership would be able to be sync'd as part of group\r\n> membership and that was something I was thinking would be handled as\r\n> part of this in a similar manner to what the 3rd party solutions provide\r\n> today using the cron-based approach.\r\n\r\nAgreed. I haven't put too much thought into those use cases yet.\r\n\r\n> > > I’d also point out though that having to do an ldap lookup on every\r\n> > > login to PG is *already* an issue in some environments, having to do\r\n> > > multiple amplifies that.\r\n> > \r\n> > You can't use the LDAP auth method with this patch yet, so this concern\r\n> > is based on code that doesn't exist. It's entirely possible that you\r\n> > could do the role query as part of the first bound connection. If that\r\n> > proves unworkable, then yes, I agree that it's a concern.\r\n> \r\n> Perhaps it could be done as part of the same connection but that then\r\n> has an impact on what the configuration of the ident LDAP lookup would\r\n> be, no? That seems like an important thing to flesh out before we move\r\n> too much farther with this patch, to make sure that, if we want that to\r\n> work, that there's a clear way to configure it to avoid the double LDAP\r\n> connection. I'm guessing you already have an idea how that'll work\r\n> though..?\r\n\r\nIt's only relevant if the other thread (which you've said you're\r\nignoring) progresses. 
The patch discussed here does not touch that code\r\npath.\r\n\r\nBut yes, I have a general idea that as long as a user can look up (but\r\nnot modify) their own role information, this should work just fine.\r\n\r\n> > > Not to mention that when the ldap servers can’t be reached for some\r\n> > > reason, no one can log into the database and that’s rather\r\n> > > unfortunate too.\r\n> > \r\n> > Assuming you have no caches, then yes. That might be a pretty good\r\n> > argument for allowing ldapmap and map to be used together, actually, so\r\n> > that you can have some critical users who can always log in as\r\n> > \"themselves\" or \"admin\" or etc. Or maybe it's an argument for allowing\r\n> > HBA to handle fallback methods of authentication.\r\n> \r\n> Ok, so now we're talking about a cache that needs to be implemented\r\n> which will ... store the user's password for LDAP authentication? Or\r\n> what the mapping is for various LDAP IDs to PG roles? And how will that\r\n> cache be managed? Would it be handled by dump/restore? What about\r\n> pg_upgrade? How will entries in the cache be removed?\r\n\r\nYou keep pulling the authentication discussion, which this patch does\r\nnot touch on purpose, into this discussion about authorization. The\r\nauthz info requested by this patch seems like it can be cached.\r\n\r\nPeople currently using LDAP authentication (which again, this patch\r\ncannot use because there is no LDAP user mapping) either have existing\r\nHA infrastructure that they're happy with, or they don't. This patch\r\nshouldn't make that situation any better or worse -- *if* the lookup\r\ncan be done on one connection.\r\n\r\n> And mainly- how is this different from just having all the roles in PG\r\n> to begin with..?\r\n\r\nThis comment seems counterproductive. 
One major difference is that\r\nPostgres doesn't have to duplicate the authentication info that some\r\nother system already holds.\r\n\r\n> > Luckily I think it's pretty easy to communicate to LDAP users that if\r\n> > *all* your login infrastructure goes down, you will no longer be able\r\n> > to log in. They're probably used to that idea, if they haven't set up\r\n> > any availability infra.\r\n> \r\n> Except that most of the rest of the infrastructure may continue to work\r\n> just fine except for logging in- which is something most folks only do\r\n> once a day. That is, why is the SQL Server system still happily\r\n> accepting connections while the AD is being rebooted? Or why can I\r\n> still log into the company website even though AD is down, but I can't\r\n> get into PG? Not everything in an environment is tied to LDAP being up\r\n> and running all the time, so it's not nearly so cut and dry in many,\r\n> many cases.\r\n\r\nWhatever LDAP users currently deal with, this patch doesn't change\r\ntheir experience, right? It seems like it's a lot easier to add caching\r\nto a synchronous check, to make it asynchronous and a little more\r\nfault-tolerant, than it is to do the reverse.\r\n\r\n> > > These are, of course, arguments for moving away from methods that\r\n> > > require checking with some other system synchronously during login-\r\n> > > which is another reason why it’s better to have the authentication\r\n> > > credentials easily map to the PG role, without the need for external\r\n> > > checks at login time. That’s done with today’s pg_ident, but this\r\n> > > patch would change that.\r\n> > \r\n> > There are arguments for moving towards synchronous checks as well.\r\n> > Central revocation of credentials (in timeframes shorter than ticket\r\n> > expiration) is what comes to mind. 
Revocation is hard and usually\r\n> > conflicts with the desire for availability.\r\n> \r\n> Revocation in less time than ticket lifetime and everything falling over\r\n> due to the AD being restarted are very different. The approaches being\r\n> discussed are all much shorter than ticket lifetime and so that's hardly\r\n> an appropriate comparison to be making. I didn't suggest that waiting\r\n> for ticket expiration would be appropriate when it comes to syncing\r\n> accounts between AD and PG or that it would be appropriate for\r\n> revocation. Regarding the cache'ing proposed above- in such a case,\r\n> clearly, revocation wouldn't be syncronous either. Certainly in the\r\n> cases today where cronjobs are being used to perform the sync,\r\n> revocation also isn't syncronous (unless also using LDAP for\r\n> authentication, of course, though that wouldn't do anything for existing\r\n> sessions, while removing role memberships does...).\r\n\r\nSure. Again: tradeoffs.\r\n\r\n> > The two systems have different architectures, and different security\r\n> > properties, and you have me at a disadvantage in that you can see the\r\n> > experimental code I have written and I cannot see the hypothetical code\r\n> > in your head.\r\n> \r\n> I've barely glanced at the code you've written <snip>\r\n\r\nThis is frustrating to read. I think we're talking past each other,\r\nbecause I'm trying to talk about this patch and you're talking about\r\nother things.\r\n\r\n> The only reference to hypothetical code\r\n> is the idea of a background or other worker that subscribes to changes\r\n> in LDAP and implements those changes in PG instead of having something\r\n> cron-based do it\r\n\r\nYes. That's what I was referring to.\r\n\r\n> , but that doesn't really change anything about the\r\n> architectural question of if we cache (either with an explicit cache, as\r\n> you've opined us adding above, though which there is no code for today,\r\n\r\nLDAP caches exist... 
I'm not suggesting we implement a Postgres-branded \r\nLDAP cache.\r\n\r\n> or just by using PG's existing role/membership system) or call out to\r\n> LDAP for every login.\r\n> \r\n> > It sounds like I'm more concerned with the ability to have an online\r\n> > central source of truth for access control, accepting that denial of\r\n> > service may cause the system to fail shut; and you're more concerned\r\n> > with availability in the face of network failure, accepting that denial\r\n> > of service may cause the system to fail open. I think that's a design\r\n> > decision that belongs to an end user.\r\n> \r\n> There is more to it than just failing shut/closed. Part of the argument\r\n> being used to drive this change was that it would help to reduce the\r\n> load on the LDAP servers because there wouldn't be a need to run large\r\n> queries on them frequently out of cron to keep PG's understanding of\r\n> what the roles are and their mappings is matching what's in LDAP.\r\n\r\nYes.\r\n\r\n> > The distributed availability problems you're describing are, in my\r\n> > experience, typically solved by caching. With your not-yet-written\r\n> > solution, the caching is built into Postgres, and it's on all of the\r\n> > time, but may (see below) only actually perform well with Active\r\n> > Directory. With my solution, any caching is optional, because it has to\r\n> > be implemented/maintained external to Postgres, but because it's just\r\n> > generic \"LDAP caching\" then it should be broadly compatible and we\r\n> > don't have to maintain it. I can see arguments for and against both\r\n> > approaches.\r\n> \r\n> I'm a bit confused by the this- either you're referring to the cache\r\n> being PG's existing system, which certainly has already been written,\r\n> and has existed since it was committed and released as part of 8.1, and\r\n> is, indeed, on all the time ... 
or you're talking about something else\r\n> which hasn't been written and could therefore be anything, though I'm\r\n> generally against the idea of having an independent cache for this, as\r\n> described above.\r\n\r\nYou just proposed an internal caching system, immediately upthread:\r\n\"I'd go a step further and suggest that the way to do this is with a\r\nbackground worker that's started up and connects to an LDAP\r\ninfrastructure and listens for changes, allowing the system to pick up\r\non new roles/memberships as soon as they're created in the LDAP\r\nenvironment.\" That proposal is what I was referring to by \"your not-\r\nyet-written solution\".\r\n\r\n> As for optional cacheing with some generic LDAP caching system, that\r\n> strikes me as clearly even worse than building something into PG for\r\n> this as it requires maintaining yet another system in order to have a\r\n> reasonably well working system and that isn't good.\r\n\r\nA choice for the end user. If they don't want to deal with LDAP\r\ninfrastructure, they don't have to use it.\r\n\r\n> While it's good\r\n> that we have pgbouncer, it'd certainly be better if we didn't need it\r\n> and it's got a bunch of downsides to it. I strongly suspect the same\r\n> would be true of some external generic \"LDAP cacheing\" system as is\r\n> referred to above, though as there isn't anything to look at, I can't\r\n> say for sure.\r\n\r\nWe can take a look at OpenLDAP's proxy caching for some info. That\r\nwon't be perfectly representative but I don't think there's \"nothing to\r\nlook at\".\r\n\r\n> Regarding 'performing well', while lots of little queries may be better\r\n> in some cases than less frequent larger queries, that's really going to\r\n> depend on the frequency of each and therefore really be rather dependent\r\n> on the environment and usage. In any case, however, being able to\r\n> leverage change modifications instead of fully resyncing will definitely\r\n> be better. 
At the same time, however, if we have the external generic\r\n> LDAP cacheing system that's being claimed ... why wouldn't we simply use\r\n> that with the cron-based system today to offload those from the main\r\n> LDAP systems?\r\n\r\nI think there's an architectural difference between a proxy cache that\r\nis set up to reduce load on a central server, and one that is set up to\r\nhandle network partitions while ensuring liveness. To be fair, I don't\r\nknow which use cases existing solutions can handle. But those two don't\r\nseem to be the same to me.\r\n\r\nI know that I have users who are okay with the query load from logins,\r\nbut not with the query load of their role-sync scripts. That's a good\r\nenough datapoint for me.\r\n\r\n> > That would certainly be a useful thing to implement for deployments\r\n> > that can use it. But my personal interest in writing \"LDAP\" code that\r\n> > only works with AD is nil, at least in the short term.\r\n> > \r\n> > (The continued attitude that Microsoft Active Directory is \"the one\r\n> > that really matters\" is really frustrating. I have users on LDAP\r\n> > without Active Directory. Postgres tests are written against OpenLDAP.)\r\n> \r\n> What would you consider the important directories to worry about beyond\r\n> AD? 
I don't consider the PG testing framework to be particularly\r\n> indicative of what enterprises are actually running.\r\n\r\nI have end users running\r\n- NetIQ/Novell eDirectory\r\n- Oracle Directory Server\r\n- Red Hat IdM\r\nin addition to AD.\r\n\r\n> > > OpenLDAP has an audit log system which can be used though it’s\r\n> > > certainly not as nice and would require code specific to it.\r\n> > > \r\n> > > This talks a bit about other directories: \r\n> > > https://docs.informatica.com/data-integration/powerexchange-adapters-for-powercenter/10-1/powerexchange-for-ldap-user-guide-for-powercenter/ldap-sessions/configuring-change-data-capture/methods-for-tracking-changes-in-different-directories.html\r\n> > > \r\n> > > I do wish they all supported it cleanly in the same way.\r\n> > \r\n> > Okay. But the answer to \"is persistent search widely implemented?\"\r\n> > appears to be \"No.\"\r\n> \r\n> I'm curious as to how the large environments that you've worked with\r\n> have generally solved this issue. Is there a generic LDAP cacheing\r\n> system that's been used? What?\r\n\r\nThey haven't solved the issue; that's why I'm poking at it. Several\r\nusers have to cobble together scripts because of poor interaction with\r\ntheir existing LDAP deployments (or complete lack of support, in the\r\ncase of pgbouncer).\r\n\r\n> > I don't see a good way to push the filter back into the HBA, because it\r\n> > may very well depend on the users being mapped (i.e. there may need to\r\n> > be multiple lines in the map). Same for the query attributes. In fact\r\n> > if I'm already using AD Kerberos or SSPI and I want to be able to\r\n> > handle users coming from multiple domains, couldn't I be querying\r\n> > entirely different servers depending on the username presented?\r\n> \r\n> Yeah, that's a good point and which argues for putting everything into\r\n> the ident. 
In such a situation as you describe above, we wouldn't\r\n> actually have any LDAP configuration in the HBA and I'm entirely fine\r\n> with that- we'd just have it all in ident. I don't see how you'd make\r\n> that work with, as you suggest above, LDAP-based authentication and the\r\n> idea of having only one connection be used for the LDAP-based auth and\r\n> the mapping lookup, but I'm also not generally worried about LDAP-based\r\n> auth and would rather we rip it out entirely. :)\r\n> \r\n> As such, I'd say that you've largely convinced me that we should just\r\n> move all of the LDAP configuration for the lookup into the ident and\r\n> discourage people from using LDAP-based authentication and from putting\r\n> LDAP configuration into the hba. \r\n\r\nI'm willing to bet that Postgres dropping support will not result in my\r\nend users abandoning their LDAP infrastructure. Either I and others in\r\nmy position will need to maintain forks, or my end users will find a\r\ndifferent database.\r\n\r\nIf there's widespread agreement that the project doesn't want to\r\nmaintain an LDAP auth method -- so far I think you've provided the only\r\nsuch opinion, that I've seen at least -- that might be a good argument\r\nfor introducing pluggable auth so that the community can maintain the\r\nmethods that are important to them.\r\n\r\n> I'm still a fan of the general idea of\r\n> having a way to configure such ldap parameters in one place in whatever\r\n> file they go into and then re-using that multiple times on the general\r\n> assumption that folks are likely to need to reference a particular LDAP\r\n> configuration more than once, wherever it's configured.\r\n\r\nSure.\r\n\r\n> > You're open to the idea of bolting a new key/value grammar onto the HBA\r\n> > parser, but not to the idea of brainstorming a different configuration\r\n> > DSL?\r\n> \r\n> Short answer- yes (or, as mentioned just above, into the ident file vs.\r\n> the hba). 
I'd rather we build on the existing configuration systems\r\n> that we have rather than invent something new that will then have to\r\n> work with the others, as I don't see it as likely that we could just\r\n> replace the existing ones with something new and make everyone\r\n> change. Having yet another one strikes me as worse than making\r\n> improvements to the existing ones (be those 'bolted on' or otherwise).\r\n\r\nI think the key to maintaining incrementally built systems is that at\r\nsome point, eventually, you refactor the thing. There was a brief\r\nquestion on what that might look like, from Peter. You stepped in with\r\nsome very strong opinions.\r\n\r\n--Jacob\r\n", "msg_date": "Wed, 2 Feb 2022 19:45:04 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PoC] Delegating pg_ident to a third party" } ]
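The building blocks this thread keeps referring to are PostgreSQL's existing LDAP search+bind authentication in pg_hba.conf and the regex user-name maps in pg_ident.conf. As a point of reference, a stock setup splits the pieces like this — a minimal sketch in which the host names, realm, and map name are illustrative, not taken from the thread:

```
# pg_hba.conf -- search+bind LDAP authentication, as currently supported
host  all  all  10.0.0.0/8  ldap  ldapserver=ldap.example.net ldapbasedn="dc=example,dc=net" ldapsearchfilter="(uid=$username)"

# pg_hba.conf -- Kerberos/GSSAPI authentication routed through a user-name map
host  all  all  10.0.0.0/8  gss  map=adusers

# pg_ident.conf -- map "adusers": strip the realm from the Kerberos principal
adusers  /^(.*)@EXAMPLE\.NET$  \1
```

The proposal under discussion effectively lets that last mapping step consult an LDAP directory instead of (or alongside) static regex lines, which is why the thread debates whether such lookup configuration belongs in the hba file or in the ident file.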
[ { "msg_contents": "Hello hackers,\n\nWe cannot use ORDER BY or LIMIT/OFFSET in the current\nDELETE statement syntax, so all the rows matching the\nWHERE condition are deleted. However, the tuple retrieving\nprocess of DELETE is basically the same as for a SELECT statement,\nso I think that we can also allow DELETE to use ORDER BY\nand LIMIT/OFFSET.\n\nAttached is the concept patch. This enables the following\noperations:\n\n================================================================\npostgres=# select * from t order by i;\n i  \n----\n  1\n  2\n  2\n  2\n  2\n  5\n 10\n 20\n 33\n 35\n 53\n(11 rows)\n\npostgres=# delete from t where i = 2 limit 2;\nDELETE 2\npostgres=# select * from t order by i;\n i  \n----\n  1\n  2\n  2\n  5\n 10\n 20\n 33\n 35\n 53\n(9 rows)\n\npostgres=# delete from t order by i offset 3 limit 3;\nDELETE 3\npostgres=# select * from t order by i;\n i  \n----\n  1\n  2\n  2\n 33\n 35\n 53\n(6 rows)\n================================================================\n\nAlthough we can do similar operations using ctid and a subquery\nsuch as\n\n  DELETE FROM t WHERE ctid IN (SELECT ctid FROM t WHERE ... ORDER BY ... LIMIT ...),\n\nit is more user-friendly and intuitive to allow it in the DELETE syntax\nbecause ctid is a system column and most users may not be familiar with it.\n\nAlthough this is not allowed in the SQL standard, it is supported\nin MySQL[1]. DB2 also supports it although the syntax is somewhat\nstrange.[2]\n\nAlso, there seem to be some use cases. 
For example, \n- when you want to delete the specified number of rows from a table\n  that doesn't have a primary key and contains duplicated tuples.\n- when you want to delete the bottom 10 items with bad scores\n  (without using the rank() window function).\n- when you want to delete only some of the rows because it takes time\n  to delete all of them.\n\n[1] https://dev.mysql.com/doc/refman/8.0/en/delete.html\n[2] https://www.dba-db2.com/2015/04/delete-first-1000-rows-in-a-db2-table-using-fetch-first.html\n\nWhat do you think about it?\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Fri, 17 Dec 2021 09:47:18 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Allow DELETE to use ORDER BY and LIMIT/OFFSET" }, { "msg_contents": "On Fri, 17 Dec 2021 09:47:18 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> Hello hackers,\n> \n> We cannot use ORDER BY or LIMIT/OFFSET in the current\n> DELETE statement syntax, so all the row matching the\n> WHERE condition are deleted. However, the tuple retrieving\n> process of DELETE is basically same as SELECT statement,\n> so I think that we can also allow DELETE to use ORDER BY\n> and LIMIT/OFFSET.\n> \n> Attached is the concept patch. 
This enables the following\n> operations:\n\nAfter post this, I noticed that there are several similar\nproposals in past:\n\nhttps://www.postgresql.org/message-id/flat/AANLkTi%3D6fBZh9yZT7f7kKh%2BzmQngAyHgZWBPM3eiEMj1%40mail.gmail.com\nhttps://www.postgresql.org/message-id/flat/1393112801.59251.YahooMailNeo%40web163006.mail.bf1.yahoo.com\nhttps://www.postgresql.org/message-id/flat/CADB9FDf-Vh6RnKAMZ4Rrg_YP9p3THdPbji8qe4qkxRuiOwm%3Dmg%40mail.gmail.com\nhttps://www.postgresql.org/message-id/flat/CALAY4q9fcrscybax7fg_uojFwjw_Wg0UMuSrf-FvN68SeSAPAA%40mail.gmail.com\n\nAnyway, I'll review these threads before progressing it.\n\n> \n> ================================================================\n> postgres=# select * from t order by i;\n> i \n> ----\n> 1\n> 2\n> 2\n> 2\n> 2\n> 5\n> 10\n> 20\n> 33\n> 35\n> 53\n> (11 rows)\n> \n> postgres=# delete from t where i = 2 limit 2;\n> DELETE 2\n> postgres=# select * from t order by i;\n> i \n> ----\n> 1\n> 2\n> 2\n> 5\n> 10\n> 20\n> 33\n> 35\n> 53\n> (9 rows)\n> \n> postgres=# delete from t order by i offset 3 limit 3;\n> DELETE 3\n> postgres=# select * from t order by i;\n> i \n> ----\n> 1\n> 2\n> 2\n> 33\n> 35\n> 53\n> (6 rows)\n> ================================================================\n> \n> Although we can do the similar operations using ctid and a subquery\n> such as\n> \n> DELETE FROM t WHERE ctid IN (SELECT ctid FROM t WHERE ... ORDER BY ... LIMIT ...),\n> \n> it is more user friendly and intuitive to allow it in the DELETE syntax\n> because ctid is a system column and most users may not be familiar with it.\n> \n> Although this is not allowed in the SQL standard, it is supported\n> in MySQL[1]. DB2 also supports it although the syntax is somewhat\n> strange.[2]\n> \n> Also, here seem to be some use cases. 
For example, \n> - when you want to delete the specified number of rows from a table\n> that doesn't have a primary key and contains tuple duplicated.\n> - when you want to delete the bottom 10 items with bad scores\n> (without using rank() window function).\n> - when you want to delete only some of rows because it takes time\n> to delete all of them.\n> \n> [1] https://dev.mysql.com/doc/refman/8.0/en/delete.html\n> [2] https://www.dba-db2.com/2015/04/delete-first-1000-rows-in-a-db2-table-using-fetch-first.html\n> \n> How do you think it?\n> \n> -- \n> Yugo NAGATA <nagata@sraoss.co.jp>\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Fri, 17 Dec 2021 10:50:56 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Allow DELETE to use ORDER BY and LIMIT/OFFSET" }, { "msg_contents": "Yugo NAGATA <nagata@sraoss.co.jp> writes:\n> We cannot use ORDER BY or LIMIT/OFFSET in the current\n> DELETE statement syntax, so all the row matching the\n> WHERE condition are deleted. However, the tuple retrieving\n> process of DELETE is basically same as SELECT statement,\n> so I think that we can also allow DELETE to use ORDER BY\n> and LIMIT/OFFSET.\n\nIndeed, this is technically possible, but we've rejected the idea\nbefore and I'm not aware of any reason to change our minds.\nThe problem is that a partial DELETE is not very deterministic\nabout which rows are deleted, and that does not seem like a\ngreat property for a data-updating command. (The same applies\nto UPDATE, which is why we don't allow these options in that\ncommand either.) 
The core issues are:\n\n* If the sort order is underspecified, or you omit ORDER BY\nentirely, then it's not clear which rows will be operated on.\nThe LIMIT might stop after just some of the rows in a peer\ngroup, and you can't predict which ones.\n\n* UPDATE/DELETE necessarily involve the equivalent of SELECT\nFOR UPDATE, which may cause the rows to be ordered more\nsurprisingly than you expected, ie the sort happens *before*\nrows are replaced by their latest versions, which might have\ndifferent sort keys.\n\nWe live with this amount of indeterminism in SELECT, but that\ndoesn't make it a brilliant idea to allow it in UPDATE/DELETE.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Dec 2021 22:17:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow DELETE to use ORDER BY and LIMIT/OFFSET" }, { "msg_contents": "On Thursday, December 16, 2021, Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n>\n> Also, here seem to be some use cases. For example,\n> - when you want to delete the specified number of rows from a table\n> that doesn't have a primary key and contains tuple duplicated.\n\n\nNot our problem…use the tools correctly; there is always the hack\nwork-around for the few who didn’t.\n\n\n> - when you want to delete the bottom 10 items with bad scores\n> (without using rank() window function).\n\n\nThis one doesn’t make sense to me.\n\n- when you want to delete only some of rows because it takes time\n> to delete all of them.\n>\n>\nThis seems potentially compelling though I’d be more concerned about the\nmemory aspects than simply taking a long amount of time. If this is a\nproblem then maybe discuss it without having a solution-in-hand? But given\nthe intense I/O cost that would happen spreading this out over time seems\nacceptable and it should be an infrequent thing to do. 
Expecting users to\nplan and execute some custom code for their specific need seems reasonable.\n\nSo even if Tom’s technical concerns aren’t enough working on this based\nupon these uses cases doesn’t seem of high enough benefit.\n\nDavid J.\n", "msg_date": "Thu, 16 Dec 2021 20:56:59 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow DELETE to use ORDER BY and LIMIT/OFFSET" }, { "msg_contents": "On Thu, 16 Dec 2021 22:17:58 -0500\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Yugo NAGATA <nagata@sraoss.co.jp> writes:\n> > We cannot use ORDER BY or LIMIT/OFFSET in the current\n> > DELETE statement syntax, so all the row matching the\n> > WHERE condition are deleted. 
However, the tuple retrieving\n> > process of DELETE is basically same as SELECT statement,\n> > so I think that we can also allow DELETE to use ORDER BY\n> > and LIMIT/OFFSET.\n> \n> Indeed, this is technically possible, but we've rejected the idea\n> before and I'm not aware of any reason to change our minds.\n> The problem is that a partial DELETE is not very deterministic\n> about which rows are deleted, and that does not seem like a\n> great property for a data-updating command. (The same applies\n> to UPDATE, which is why we don't allow these options in that\n> command either.) The core issues are:\n> \n> * If the sort order is underspecified, or you omit ORDER BY\n> entirely, then it's not clear which rows will be operated on.\n> The LIMIT might stop after just some of the rows in a peer\n> group, and you can't predict which ones.\n> \n> * UPDATE/DELETE necessarily involve the equivalent of SELECT\n> FOR UPDATE, which may cause the rows to be ordered more\n> surprisingly than you expected, ie the sort happens *before*\n> rows are replaced by their latest versions, which might have\n> different sort keys.\n> \n> We live with this amount of indeterminism in SELECT, but that\n> doesn't make it a brilliant idea to allow it in UPDATE/DELETE.\n\nThank you for your explaining it! \nI'm glad to understand why this idea is not good and has been rejected.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Fri, 17 Dec 2021 14:41:26 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Allow DELETE to use ORDER BY and LIMIT/OFFSET" }, { "msg_contents": "On Thu, 16 Dec 2021 20:56:59 -0700\n\"David G. Johnston\" <david.g.johnston@gmail.com> wrote:\n\n> On Thursday, December 16, 2021, Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> \n> >\n> > Also, here seem to be some use cases. 
For example,\n> > - when you want to delete the specified number of rows from a table\n> > that doesn't have a primary key and contains tuple duplicated.\n> \n> \n> Not our problem…use the tools correctly; there is always the hack\n> work-around for the few who didn’t.\n> \n> \n> > - when you want to delete the bottom 10 items with bad scores\n> > (without using rank() window function).\n> \n> \n> This one doesn’t make sense to me.\n> \n> - when you want to delete only some of rows because it takes time\n> > to delete all of them.\n> >\n> >\n> This seems potentially compelling though I’d be more concerned about the\n> memory aspects than simply taking a long amount of time. If this is a\n> problem then maybe discuss it without having a solution-in-hand? But given\n> the intense I/O cost that would happen spreading this out over time seems\n> acceptable and it should be an infrequent thing to do. Expecting users to\n> plan and execute some custom code for their specific need seems reasonable.\n> \n> So even if Tom’s technical concerns aren’t enough working on this based\n> upon these uses cases doesn’t seem of high enough benefit.\n\nThank you for your comments.\nOk. I agree that there are not so strong use cases.\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Fri, 17 Dec 2021 14:56:58 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Allow DELETE to use ORDER BY and LIMIT/OFFSET" }, { "msg_contents": "On Thu, 16 Dec 2021 at 22:18, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> * If the sort order is underspecified, or you omit ORDER BY\n> entirely, then it's not clear which rows will be operated on.\n> The LIMIT might stop after just some of the rows in a peer\n> group, and you can't predict which ones.\n\nMeh, that never seemed very compelling to me. 
I think that's on the\nuser and there are *lots* of cases where the user can easily know\nenough extra context to know that's what she wants. In particular I've\noften wanted to delete one of two identical records and it would have\nbeen really easy to just do a DELETE LIMIT 1. I know there are ways to\ndo it but it's always seemed like unnecessary hassle when there was a\nnatural way to express it.\n\n> * UPDATE/DELETE necessarily involve the equivalent of SELECT\n> FOR UPDATE, which may cause the rows to be ordered more\n> surprisingly than you expected, ie the sort happens *before*\n> rows are replaced by their latest versions, which might have\n> different sort keys.\n\nThis... is a real issue. Or is it? Just to be clear I think what\nyou're referring to is a case like:\n\nINSERT INTO t values (1),(2)\n\nIn another session: BEGIN; UPDATE t set c=0 where c=2\n\nDELETE FROM t ORDER BY c ASC LIMIT 1\n<blocks>\n\nIn other session: COMMIT\n\nWhich row was deleted? In this case it was the row where c=1. Even\nthough the UPDATE reported success (i.e. 1 row updated) so presumably\nit happened \"before\" the delete. The delete saw the ordering from\nbefore it was blocked and saw the ordering with c=1, c=2 so apparently\nit happened \"before\" the update.\n\nThere are plenty of other ways to see the same surprising\nserialization failure. If instead of a DELETE you did an UPDATE there\nare any number of functions or other features that have been added\nwhich can expose the order in which the updates happened.\n\nThe way to solve those cases would be to use serializable isolation\n(or even just repeatable read in this case). 
Wouldn't that also solve\nthe DELETE serialization failure too?\n\n-- \ngreg\n\n\n", "msg_date": "Fri, 17 Dec 2021 01:40:45 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Allow DELETE to use ORDER BY and LIMIT/OFFSET" }, { "msg_contents": "Hello Greg,\n\nOn Fri, 17 Dec 2021 01:40:45 -0500\nGreg Stark <stark@mit.edu> wrote:\n\n> On Thu, 16 Dec 2021 at 22:18, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > * If the sort order is underspecified, or you omit ORDER BY\n> > entirely, then it's not clear which rows will be operated on.\n> > The LIMIT might stop after just some of the rows in a peer\n> > group, and you can't predict which ones.\n> \n> Meh, that never seemed very compelling to me. I think that's on the\n> user and there are *lots* of cases where the user can easily know\n> enough extra context to know that's what she wants. In particular I've\n> often wanted to delete one of two identical records and it would have\n> been really easy to just do a DELETE LIMIT 1. I know there are ways to\n> do it but it's always seemed like unnecessary hassle when there was a\n> natural way to express it.\n\nOut of curiosity, could you please tell me the concrete situations\nwhere you wanted to delete one of two identical records?\n\n> > * UPDATE/DELETE necessarily involve the equivalent of SELECT\n> > FOR UPDATE, which may cause the rows to be ordered more\n> > surprisingly than you expected, ie the sort happens *before*\n> > rows are replaced by their latest versions, which might have\n> > different sort keys.\n> \n> This... is a real issue. Or is it? Just to be clear I think what\n> you're referring to is a case like:\n> \n> INSERT INTO t values (1),(2)\n> \n> In another session: BEGIN; UPDATE t set c=0 where c=2\n> \n> DELETE FROM t ORDER BY c ASC LIMIT 1\n> <blocks>\n> \n> In other session: COMMIT\n> \n> Which row was deleted? In this case it was the row where c=1. Even\n> though the UPDATE reported success (i.e. 
1 row updated) so presumably\n> it happened \"before\" the delete. The delete saw the ordering from\n> before it was blocked and saw the ordering with c=1, c=2 so apparently\n> it happened \"before\" the update.\n\nWhen I tried it using my patch, the DELETE deletes the row where c=1\nas same as above, but it did not block. Is that the result of an\nexperiment using my patch or other RDBMS like MySQL?\n \n> There are plenty of other ways to see the same surprising\n> serialization failure. If instead of a DELETE you did an UPDATE there\n> are any number of functions or other features that have been added\n> which can expose the order in which the updates happened.\n> \n> The way to solve those cases would be to use serializable isolation\n> (or even just repeatable read in this case). Wouldn't that also solve\n> the DELETE serialization failure too?\n\nDo you mean such serialization failures would be avoided in\nSERIALIZABLE or REPATABLE READ by aborting the transaction, such \nserialization failures may be accepted in READ COMMITTED?\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Mon, 20 Dec 2021 19:45:42 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Allow DELETE to use ORDER BY and LIMIT/OFFSET" }, { "msg_contents": ">\n> Out of curiosity, could you please tell me the concrete situations\n> where you wanted to delete one of two identical records?\n>\n\nIn my case, there is a table with known duplicates, and we would like to\ndelete all but the one with the lowest ctid, and then add a unique index to\nthe table which then allows us to use INSERT ON CONFLICT in a meaningful\nway.\n\nThe other need for a DELETE...LIMIT or UPDATE...LIMIT is when you're\nworried about flooding a replica, so you parcel out the DML into chunks\nthat don't cause unacceptable lag on the replica.\n\nBoth of these can be accomplished via DELETE FROM foo WHERE ctid IN (\nSELECT ... FROM foo ... 
LIMIT 1000), but until recently such a construct\nwould result in a full table scan, and you'd take the same hit with each\nsubsequent DML.\n\nI *believe* that the ctid range scan now can limit those scans, especially\nif you can order the limited set by ctid, but those techniques are not\nwidely known at this time.\n\nOut of curiosity, could you please tell me the concrete situations\nwhere you wanted to delete one of two identical records?In my case, there is a table with known duplicates, and we would like to delete all but the one with the lowest ctid, and then add a unique index to the table which then allows us to use INSERT ON CONFLICT in a meaningful way.The other need for a DELETE...LIMIT or UPDATE...LIMIT is when you're worried about flooding a replica, so you parcel out the DML into chunks that don't cause unacceptable lag on the replica.Both of these can be accomplished via  DELETE FROM foo WHERE ctid IN ( SELECT ... FROM foo ... LIMIT 1000), but until recently such a construct would result in a full table scan, and you'd take the same hit with each subsequent DML.I believe that the ctid range scan now can limit those scans, especially if you can order the limited set by ctid, but those techniques are not widely known at this time.", "msg_date": "Mon, 20 Dec 2021 15:22:45 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow DELETE to use ORDER BY and LIMIT/OFFSET" } ]
[ { "msg_contents": "I want to mention that the 2nd problem I mentioned here is still broken.\nhttps://www.postgresql.org/message-id/20210717010259.GU20208@telsasoft.com\n\nIt happens if non-inheritted triggers on child and parent have the same name.\n\nOn Fri, Jul 16, 2021 at 08:02:59PM -0500, Justin Pryzby wrote:\n> On Fri, Jul 16, 2021 at 06:01:12PM -0400, Alvaro Herrera wrote:\n> > On 2021-Jul-16, Justin Pryzby wrote:\n> > > CREATE TABLE p(i int) PARTITION BY RANGE(i);\n> > > CREATE TABLE p1 PARTITION OF p FOR VALUES FROM (1)TO(2);\n> > > CREATE FUNCTION foo() returns trigger LANGUAGE plpgsql AS $$begin end$$;\n> > > CREATE TRIGGER x AFTER DELETE ON p1 EXECUTE FUNCTION foo();\n> > > CREATE TRIGGER x AFTER DELETE ON p EXECUTE FUNCTION foo();\n> > \n> > Hmm, interesting -- those statement triggers are not cloned, so what is\n> > going on here is just that the psql query to show them is tripping on\n> > its shoelaces ... I'll try to find a fix.\n> > \n> > I *think* the problem is that the query matches triggers by name and\n> > parent/child relationship; we're missing to ignore triggers by tgtype.\n> > It's not great design that tgtype is a bitmask of unrelated flags ...\n> \n> I see it's the subquery Amit wrote and proposed here:\n> https://www.postgresql.org/message-id/CA+HiwqEiMe0tCOoPOwjQrdH5fxnZccMR7oeW=f9FmgszJQbgFg@mail.gmail.com\n> \n> .. and I realize that I've accidentally succeeded in breaking what I first\n> attempted to break 15 months ago:\n> \n> On Mon, Apr 20, 2020 at 02:57:40PM -0500, Justin Pryzby wrote:\n> > I'm happy to see that this doesn't require a recursive cte, at least.\n> > I was trying to think how to break it by returning multiple results or results\n> > out of order, but I think that can't happen.\n> \n> If you assume that pg_partition_ancestors returns its results in order, I think\n> you can fix it by adding LIMIT 1. 
Otherwise I think you need a recursive CTE,\n> as I'd feared.\n> \n> Note also that I'd sent a patch to add newlines, to make psql -E look pretty.\n> v6-0001-fixups-c33869cc3bfc42bce822251f2fa1a2a346f86cc5.patch \n\n\n", "msg_date": "Fri, 17 Dec 2021 09:43:56 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "\\d with triggers: more than one row returned by a subquery used as\n an expression" }, { "msg_contents": "On Fri, Dec 17, 2021 at 09:43:56AM -0600, Justin Pryzby wrote:\n> I want to mention that the 2nd problem I mentioned here is still broken.\n> https://www.postgresql.org/message-id/20210717010259.GU20208@telsasoft.com\n> \n> It happens if non-inheritted triggers on child and parent have the same name.\n\nThis is the fix I was proposing\n\nIt depends on pg_partition_ancestors() to return its partitions in order:\nthis partition => parent => ... => root.", "msg_date": "Wed, 22 Dec 2021 15:03:08 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: \\d with triggers: more than one row returned by a subquery used\n as an expression" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Fri, Dec 17, 2021 at 09:43:56AM -0600, Justin Pryzby wrote:\n>> I want to mention that the 2nd problem I mentioned here is still broken.\n>> https://www.postgresql.org/message-id/20210717010259.GU20208@telsasoft.com\n>> It happens if non-inheritted triggers on child and parent have the same name.\n\n> This is the fix I was proposing\n> It depends on pg_partition_ancestors() to return its partitions in order:\n> this partition => parent => ... => root.\n\nI don't think that works at all. 
I might be willing to accept the\nassumption about pg_partition_ancestors()'s behavior, but you're also\nmaking an assumption about how the output of pg_partition_ancestors()\nis joined to \"pg_trigger AS u\", and I really don't think that's safe.\n\nISTM the real problem is the assumption that only related triggers could\nshare a tgname, which evidently isn't true. I think this query needs to\nactually match on tgparentid, rather than taking shortcuts. If we don't\nwant to use a recursive CTE, maybe we could define it as only looking up\nto the immediate parent, rather than necessarily finding the root?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 17 Jan 2022 17:02:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \\d with triggers: more than one row returned by a subquery used\n as an expression" }, { "msg_contents": "On Mon, Jan 17, 2022 at 05:02:00PM -0500, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Fri, Dec 17, 2021 at 09:43:56AM -0600, Justin Pryzby wrote:\n> >> I want to mention that the 2nd problem I mentioned here is still broken.\n> >> https://www.postgresql.org/message-id/20210717010259.GU20208@telsasoft.com\n> >> It happens if non-inheritted triggers on child and parent have the same name.\n> \n> > This is the fix I was proposing\n> > It depends on pg_partition_ancestors() to return its partitions in order:\n> > this partition => parent => ... => root.\n> \n> I don't think that works at all. I might be willing to accept the\n> assumption about pg_partition_ancestors()'s behavior, but you're also\n> making an assumption about how the output of pg_partition_ancestors()\n> is joined to \"pg_trigger AS u\", and I really don't think that's safe.\n\n> ISTM the real problem is the assumption that only related triggers could\n> share a tgname, which evidently isn't true. 
I think this query needs to\n> actually match on tgparentid, rather than taking shortcuts.\n\nI don't think that should be needed - tgparentid should match\npg_partition_ancestors().\n\n> If we don't\n> want to use a recursive CTE, maybe we could define it as only looking up to\n> the immediate parent, rather than necessarily finding the root?\n\nI think that defeats the intent of c33869cc3.\n\nIs there any reason why WITH ORDINALITY can't work ?\nThis is passing the smoke test.\n\n-- \nJustin", "msg_date": "Mon, 17 Jan 2022 18:08:24 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: \\d with triggers: more than one row returned by a subquery used\n as an expression" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Mon, Jan 17, 2022 at 05:02:00PM -0500, Tom Lane wrote:\n>> ISTM the real problem is the assumption that only related triggers could\n>> share a tgname, which evidently isn't true. I think this query needs to\n>> actually match on tgparentid, rather than taking shortcuts.\n\n> I don't think that should be needed - tgparentid should match\n> pg_partition_ancestors().\n\nUh, what? tgparentid is a trigger OID, not a relation OID.\n\n> Is there any reason why WITH ORDINALITY can't work ?\n> This is passing the smoke test.\n\nHow hard did you try to break it? It still seems to me that\nthis can be fooled by an unrelated trigger with the same tgname.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 17 Jan 2022 19:14:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \\d with triggers: more than one row returned by a subquery used\n as an expression" }, { "msg_contents": "I wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n>> Is there any reason why WITH ORDINALITY can't work ?\n>> This is passing the smoke test.\n\n> How hard did you try to break it? 
It still seems to me that\n> this can be fooled by an unrelated trigger with the same tgname.\n\nHmm ... no, it does work, because we'll stop at the first trigger\nwith tgparentid = 0, so unrelated triggers further up the partition stack\ndon't matter. But this definitely requires commentary. (And I'm\nnot too happy with burying such a complicated thing inside a conditional\ninside a printf, either.) Will see about cleaning it up.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 17 Jan 2022 19:50:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \\d with triggers: more than one row returned by a subquery used\n as an expression" } ]
[ { "msg_contents": "Hi all!\n\nAs part of a customer project we are looking to implement a reloption \nfor views which, when set, runs the subquery as invoked by the user \nrather than the view owner, as is currently the case.\nThe rewrite rule's table references are then checked as if the user were \nreferencing the table(s) directly.\n\nThis feature is similar to so-called 'SECURITY INVOKER' views in other DBMSs.\nAlthough such permission checking could be implemented using views which \nSELECT from a table function and further using triggers, that approach \nhas obvious performance downsides.\n\nOur initial thought on implementing this was to simply add another \nreloption for views, just like the already existing `security_barrier`. \nWith this in place, we can then conditionally evaluate in \nRelationBuildRuleLock() whether we need to call setRuleCheckAsUser() or not.\nThe new reloption has been named `security`, which is an enum currently \nonly supporting a single value: `relation_permissions`.\n\nThe code for fetching the rules and triggers in RelationBuildDesc() had \nto be moved after the parsing of the reloptions, since with this change \nRelationBuildRuleLock() now depends upon having relation->rd_options \navailable.\n\nThe current behavior of views without that new reloption set is unaltered.\nThis is implemented as such in patch 0001.\n\nRegression tests are included for both the new reloption of CREATE VIEW \nand the row-level security side of this too, contained in patch 0002.\nAll regression tests are passing without errors.\n\nFinally, patch 0003 updates the documentation for this new reloption.\n\nA simplified example of how this feature can be used could look like this:\n\n     CREATE TABLE people (id int, name text, company text);\n     ALTER TABLE people ENABLE ROW LEVEL SECURITY;\n     INSERT INTO people VALUES (1, 'alice', 'foo'), (2, 'bob', 'bar');\n\n     CREATE VIEW customers_no_security\n         AS SELECT * FROM people;\n\n     CREATE VIEW customers\n         WITH 
(security=relation_permissions)\n AS SELECT * FROM people;\n\n -- We want carol to only see people from company 'foo'\n CREATE ROLE carol;\n CREATE POLICY company_foo_only\n ON people FOR ALL TO carol USING (company = 'foo');\n\n GRANT SELECT ON people TO carol;\n GRANT SELECT ON customers_no_security TO carol;\n GRANT SELECT ON customers TO carol;\n\nNow using these tables as carol:\n\n postgres=# SET ROLE carol;\n SET\n\nFor the `people` table, the policy is applied as expected:\n\n postgres=> SELECT * FROM people;\n id | name | company\n ----+-------+---------\n 1 | alice | foo\n (1 row)\n\nIf we now use the view with the new relopt set, the policy is applied too:\n\n postgres=> SELECT * FROM customers;\n id | name | company\n ----+-------+---------\n 1 | alice | foo\n (1 row)\n\nBut without the `security=relation_permissions` relopt, carol gets to \nsee data they should not be able to due to the policy not being applied, \nsince the rules are checked against the view owner:\n\n postgres=> SELECT * FROM customers_no_security;\n id | name | company\n ----+-------+---------\n 1 | alice | foo\n 2 | bob | bar\n (2 rows)\n\n\nExcluding regression tests and documentation, the changes boil down to this:\n src/backend/access/common/reloptions.c | 20\n src/backend/nodes/copyfuncs.c | 1\n src/backend/nodes/equalfuncs.c | 1\n src/backend/nodes/outfuncs.c | 1\n src/backend/nodes/readfuncs.c | 1\n src/backend/optimizer/plan/subselect.c | 1\n src/backend/optimizer/prep/prepjointree.c | 1\n src/backend/rewrite/rewriteHandler.c | 1\n src/backend/utils/cache/relcache.c | 62\n src/include/nodes/parsenodes.h | 3\n src/include/utils/rel.h | 21\n 11 files changed, 84 insertions(+), 29 deletions(-)\n\nAll patches are against current master.\n\nThanks,\nChristoph Heiss", "msg_date": "Fri, 17 Dec 2021 18:31:26 +0100", "msg_from": "Christoph Heiss <christoph.heiss@cybertec.at>", "msg_from_op": true, "msg_subject": "[PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "On 
Fri, 2021-12-17 at 18:31 +0100, Christoph Heiss wrote:\n> As part of a customer project we are looking to implement an reloption \n> for views which when set, runs the subquery as invoked by the user \n> rather than the view owner, as is currently the case.\n> The rewrite rule's table references are then checked as if the user were \n> referencing the table(s) directly.\n> \n> This feature is similar to so-called 'SECURITY INVOKER' views in other DBMS.\n> Although such permission checking could be implemented using views which \n> SELECT from a table function and further using triggers, that approach \n> has obvious performance downsides.\n\nThis has been requested before, see for example\nhttps://stackoverflow.com/q/33858030/6464308\n\nRow Level Security is only one use case; there may be other situations\nwhen it is useful to check permissions on the underlying objects with\nthe current user rather than with the view owner.\n\n> Our initial thought on implementing this was to simply add another \n> reloption for views, just like the already existing `security_barrier`. 
\n> With this in place, we then can conditionally evaluate in \n> RelationBuildRuleLock() if we need to call setRuleCheckAsUser() or not.\n> The new reloption has been named `security`, which is an enum currently \n> only supporting a single value: `relation_permissions`.\n\n\nYou made that an enum with only a single value.\nWhat other values could you imagine in the future?\n\nI think that this should be a boolean reloption, for example \"security_definer\".\nIf unset or set to \"off\", you would get the current behavior.\n\n\n> Finally, patch 0003 updates the documentation for this new reloption.\n\ndiff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml\nindex 64d9030652..760ea2f794 100644\n--- a/doc/src/sgml/ddl.sgml\n+++ b/doc/src/sgml/ddl.sgml\n@@ -2292,6 +2292,10 @@ GRANT SELECT (col1), UPDATE (col1) ON mytable TO miriam_rw;\n are not subject to row security.\n </para>\n \n+ <para>\n+ For views, the policies are applied as being referenced through the view owner by default, rather than the user referencing the view. To apply row security policies as defined for the invoking\nuser, the <firstterm>security</firstterm> option can be set on views (see <link linkend=\"sql-createview\">CREATE VIEW</link>) to get the same behavior.\n+ </para>\n+\n <para>\n Row security policies can be specific to commands, or to roles, or to\n both. A policy can be specified to apply to <literal>ALL</literal>\n\nPlease avoid long lines like that. Also, I don't think that the documentation on\nRLS policies is the correct place for this. 
It should be on a page dedicated to views\nor permissions.\n\nThe CREATE VIEW page already has a paragraph about this, starting with\n\"Access to tables referenced in the view is determined by permissions of the view owner.\"\nThis looks like the best place to me (and it would need to be adapted anyway).\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 11 Jan 2022 19:59:13 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "Hi Laurenz,\n\nthanks for the review!\nI've attached a v2 where I addressed the things you mentioned.\n\nOn 1/11/22 19:59, Laurenz Albe wrote:\n> [..]\n> \n> You made that an enum with only a single value.\n> What other values could you imagine in the future?\n> \n> I think that this should be a boolean reloption, for example \"security_definer\".\n> If unset or set to \"off\", you would get the current behavior.\n\nA boolean option would have been indeed the better choice, I agree.\nI haven't thought of any specific other values for this enum, it was \nrather a decision following an off-list discussion.\n\nI've changed the option to be boolean and renamed it to \n\"security_invoker\". This puts it in line with how other systems (e.g. \nMySQL) name their equivalent feature, so I think this should be an \nappropriate choice.\n\n> \n>> Finally, patch 0003 updates the documentation for this new reloption.\n> \n> [..]\n> \n> Please avoid long lines like that. \n\nFixed.\n\n> Also, I don't think that the documentation on\n> RLS policies is the correct place for this. 
It should be on a page dedicated to views\n> or permissions.\n> \n> The CREATE VIEW page already has a paragraph about this, starting with\n> \"Access to tables referenced in the view is determined by permissions of the view owner.\"\n> This looks like the best place to me (and it would need to be adapted anyway).\nIt makes sense to put it there, thanks for the pointer! I wasn't really \nthat sure where to put the documentation to start with, and this seems \nlike a more appropriate place.\n\nPlease review further.\n\nThanks,\nChristoph Heiss", "msg_date": "Tue, 18 Jan 2022 16:16:53 +0100", "msg_from": "Christoph Heiss <christoph.heiss@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "Hi,\n\nOn Tue, Jan 18, 2022 at 04:16:53PM +0100, Christoph Heiss wrote:\n> \n> I've attached a v2 where I addressed the things you mentioned.\n\nThis version unfortunately doesn't apply anymore:\nhttp://cfbot.cputube.org/patch_36_3466.log\n=== Applying patches on top of PostgreSQL commit ID e0e567a106726f6709601ee7cffe73eb6da8084e ===\n=== applying patch ./0001-PATCH-v2-1-3-Add-new-boolean-reloption-security_invo.patch\n=== applying patch ./0002-PATCH-v2-2-3-Add-regression-tests-for-new-security_i.patch\npatching file src/test/regress/expected/create_view.out\nHunk #5 FAILED at 2019.\nHunk #6 succeeded at 2056 (offset 16 lines).\n1 out of 6 hunks FAILED -- saving rejects to file src/test/regress/expected/create_view.out.rej\n\nCould you send a rebased version?\n\n\n", "msg_date": "Wed, 19 Jan 2022 16:30:01 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "Hi,\n\nOn 1/19/22 09:30, Julien Rouhaud wrote:\n> Hi,\n> \n> On Tue, Jan 18, 2022 at 04:16:53PM +0100, Christoph Heiss wrote:\n>>\n>> I've attached a v2 where I addressed the things you mentioned.\n> \n> This version unfortunately doesn't 
apply anymore:\n> http://cfbot.cputube.org/patch_36_3466.log\n> === Applying patches on top of PostgreSQL commit ID e0e567a106726f6709601ee7cffe73eb6da8084e ===\n> === applying patch ./0001-PATCH-v2-1-3-Add-new-boolean-reloption-security_invo.patch\n> === applying patch ./0002-PATCH-v2-2-3-Add-regression-tests-for-new-security_i.patch\n> patching file src/test/regress/expected/create_view.out\n> Hunk #5 FAILED at 2019.\n> Hunk #6 succeeded at 2056 (offset 16 lines).\n> 1 out of 6 hunks FAILED -- saving rejects to file src/test/regress/expected/create_view.out.rej\n> \n> Could you send a rebased version?\n\nMy bad - I attached a new version rebased on latest master.\n\nThanks,\nChristoph Heiss", "msg_date": "Wed, 19 Jan 2022 13:11:05 +0100", "msg_from": "Christoph Heiss <christoph.heiss@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "On Tue, 2022-01-18 at 16:16 +0100, Christoph Heiss wrote:\n> > I think that this should be a boolean reloption, for example \"security_definer\".\n> > If unset or set to \"off\", you would get the current behavior.\n> \n> A boolean option would have been indeed the better choice, I agree.\n> I haven't though of any specific other values for this enum, it was \n> rather a decision following a off-list discussion.\n> \n> I've changed the option to be boolean and renamed it to \n> \"security_invoker\". This puts it in line with how other systems (e.g. \n> MySQL) name their equivalent feature, so I think this should be an \n> appropriate choice.\n> \n> \n> > Also, I don't think that the documentation on\n> > RLS policies is the correct place for this.  
It should be on a page dedicated to views\n> > or permissions.\n> > \n> > The CREATE VIEW page already has a paragraph about this, starting with\n> > \"Access to tables referenced in the view is determined by permissions of the view owner.\"\n> > This looks like the best place to me (and it would need to be adapted anyway).\n> It makes sense to put it there, thanks for the pointer! I wasn't really \n> that sure where to put the documentation to start with, and this seems \n> like a more appropriate place.\n\nI gave the new patch a spin, and got a surprising result:\n\n CREATE TABLE tab (id integer);\n\n CREATE ROLE duff LOGIN;\n\n CREATE ROLE jock LOGIN;\n\n GRANT INSERT, UPDATE, DELETE ON tab TO jock;\n\n GRANT SELECT ON tab TO duff;\n\n CREATE VIEW v WITH (security_invoker = TRUE) AS SELECT * FROM tab;\n\n ALTER VIEW v OWNER TO jock;\n\n GRANT SELECT, INSERT, UPDATE, DELETE ON v TO duff;\n\n SET SESSION AUTHORIZATION duff;\n\n SELECT * FROM v;\n id \n ════\n (0 rows)\n\nThat's ok, \"duff\" has permissions to read \"tab\".\n\n INSERT INTO v VALUES (1);\n INSERT 0 1\n\nHuh? \"duff\" has no permission to insert into \"tab\"!\n\n RESET SESSION AUTHORIZATION;\n\n ALTER VIEW v SET (security_invoker = FALSE);\n\n SET SESSION AUTHORIZATION duff;\n\n SELECT * FROM v;\n ERROR: permission denied for table tab\n\nAs expected.\n\n INSERT INTO v VALUES (1);\n INSERT 0 1\n\nAs expected.\n\n\nAbout the documentation:\n\n--- a/doc/src/sgml/ref/create_view.sgml\n+++ b/doc/src/sgml/ref/create_view.sgml\n+ <varlistentry>\n+ <term><literal>security_invoker</literal> (<type>boolean</type>)</term>\n+ <listitem>\n+ <para>\n+ If this option is set, it will cause all access to the underlying\n+ tables to be checked as referenced by the invoking user, rather than\n+ the view owner. This will only take effect when row level security is\n+ enabled on the underlying tables (using <link linkend=\"sql-altertable\">\n+ <command>ALTER TABLE ... 
ENABLE ROW LEVEL SECURITY</command></link>).\n+ </para>\n\nWhy should this *only* take effect if (not \"when\") RLS is enabled?\nThe above test shows that there is an effect even without RLS.\n\n+ <para>This option can be changed on existing views using <link\n+ linkend=\"sql-alterview\"><command>ALTER VIEW</command></link>. See\n+ <xref linkend=\"ddl-rowsecurity\"/> for more details on row level security.\n+ </para>\n\nI don't think that it is necessary to mention that this can be changed with\nALTER VIEW - all storage parameters can be. I guess you copied that from\nthe \"check_option\" documentation, but I would say it need not be mentioned\nthere either.\n\n+ <para>\n+ If the <firstterm>security_invoker</firstterm> option is set on the view,\n+ access to tables is determined by permissions of the invoking user, rather\n+ than the view owner. This can be used to provide stricter permission\n+ checking to the underlying tables than by default.\n </para>\n\nSince you are talking about use cases here, RLS might deserve a mention.\n\n--- a/src/backend/access/common/reloptions.c\n+++ b/src/backend/access/common/reloptions.c\n+ {\n+ {\n+ \"security_invoker\",\n+ \"View subquery in invoked within the current security context.\",\n+ RELOPT_KIND_VIEW,\n+ AccessExclusiveLock\n+ },\n+ false\n+ },\n\nThat doesn't seem to be proper English.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Thu, 20 Jan 2022 15:20:52 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "Hi Laurenz,\n\nthank you again for the review!\n\nOn 1/20/22 15:20, Laurenz Albe wrote:\n> [..]\n> I gave the new patch a spin, and got a surprising result:\n> \n> [..]\n> \n> INSERT INTO v VALUES (1);\n> INSERT 0 1\n> \n> Huh? 
\"duff\" has no permission to insert into \"tab\"!\nThat really should not happen, thanks for finding that and helping me \ninvestigating on how to fix that!\n\nThis is now solved by checking the security_invoker property on the view \nin rewriteTargetView().\n\nI've also added a testcase for this in v4 to catch that in future.\n\n> \n> [..]\n> \n> About the documentation:\n> \n> --- a/doc/src/sgml/ref/create_view.sgml\n> +++ b/doc/src/sgml/ref/create_view.sgml\n> + <varlistentry>\n> + <term><literal>security_invoker</literal> (<type>boolean</type>)</term>\n> + <listitem>\n> + <para>\n> + If this option is set, it will cause all access to the underlying\n> + tables to be checked as referenced by the invoking user, rather than\n> + the view owner. This will only take effect when row level security is\n> + enabled on the underlying tables (using <link linkend=\"sql-altertable\">\n> + <command>ALTER TABLE ... ENABLE ROW LEVEL SECURITY</command></link>).\n> + </para>\n> \n> Why should this *only* take effect if (not \"when\") RLS is enabled?\n> The above test shows that there is an effect even without RLS.\n> \n> + <para>This option can be changed on existing views using <link\n> + linkend=\"sql-alterview\"><command>ALTER VIEW</command></link>. See\n> + <xref linkend=\"ddl-rowsecurity\"/> for more details on row level security.\n> + </para>\n> \n> I don't think that it is necessary to mention that this can be changed with\n> ALTER VIEW - all storage parameters can be. I guess you copied that from\n> the \"check_option\" documentation, but I would say it need not be mentioned\n> there either.\nExactly, I tried to fit it in with the existing parameters.\nI moved the link to ALTER VIEW to the end of the paragraph, as it \napplies to all options anyways.\n\n> \n> + <para>\n> + If the <firstterm>security_invoker</firstterm> option is set on the view,\n> + access to tables is determined by permissions of the invoking user, rather\n> + than the view owner. 
This can be used to provide stricter permission\n> + checking to the underlying tables than by default.\n> </para>\n> \n> Since you are talking about use cases here, RLS might deserve a mention.\nExpanded upon a little bit in v4.\n\n> \n> --- a/src/backend/access/common/reloptions.c\n> +++ b/src/backend/access/common/reloptions.c\n> + {\n> + {\n> + \"security_invoker\",\n> + \"View subquery in invoked within the current security context.\",\n> + RELOPT_KIND_VIEW,\n> + AccessExclusiveLock\n> + },\n> + false\n> + },\n> \n> That doesn't seem to be proper English.\nYes, that happened when rewriting this for v1 -> v2.\nFixed.\n\nThanks,\nChristoph Heiss", "msg_date": "Wed, 2 Feb 2022 18:23:18 +0100", "msg_from": "Christoph Heiss <christoph.heiss@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "On Wed, 2022-02-02 at 18:23 +0100, Christoph Heiss wrote:\n> > Huh?  \"duff\" has no permission to insert into \"tab\"!\n> That really should not happen, thanks for finding that and helping me \n> investigating on how to fix that!\n> \n> This is now solved by checking the security_invoker property on the view \n> in rewriteTargetView().\n> \n> I've also added a testcase for this in v4 to catch that in future.\n\nI tested it, and the patch works fine now.\n\nSome little comments:\n\n> --- a/src/backend/rewrite/rewriteHandler.c\n> +++ b/src/backend/rewrite/rewriteHandler.c\n> @@ -3242,9 +3243,13 @@ rewriteTargetView(Query *parsetree, Relation view)\n> 0);\n> \n> /*\n> - * Mark the new target RTE for the permissions checks that we want to\n> - * enforce against the view owner, as distinct from the query caller. 
At\n> - * the relation level, require the same INSERT/UPDATE/DELETE permissions\n> + * If the view has security_invoker set, mark the new target RTE for the\n> + * permissions checks that we want to enforce against the query caller, as\n> + * distince from the view owner.\n\nTypo: distince\n\ndiff --git a/src/test/regress/expected/create_view.out b/src/test/regress/expected/create_view.out\nindex 509e930fc7..fea893569f 100644\n--- a/src/test/regress/expected/create_view.out\n+++ b/src/test/regress/expected/create_view.out\n@@ -261,15 +261,26 @@ CREATE VIEW mysecview3 WITH (security_barrier=false)\n AS SELECT * FROM tbl1 WHERE a < 0;\n CREATE VIEW mysecview4 WITH (security_barrier)\n AS SELECT * FROM tbl1 WHERE a <> 0;\n-CREATE VIEW mysecview5 WITH (security_barrier=100) -- Error\n+CREATE VIEW mysecview5 WITH (security_invoker=true)\n+ AS SELECT * FROM tbl1 WHERE a = 100;\n+CREATE VIEW mysecview6 WITH (security_invoker=false)\n AS SELECT * FROM tbl1 WHERE a > 100;\n+CREATE VIEW mysecview7 WITH (security_invoker)\n+ AS SELECT * FROM tbl1 WHERE a < 100;\n+CREATE VIEW mysecview8 WITH (security_barrier=100) -- Error\n+ AS SELECT * FROM tbl1 WHERE a <> 100;\n ERROR: invalid value for boolean option \"security_barrier\": 100\n-CREATE VIEW mysecview6 WITH (invalid_option) -- Error\n+CREATE VIEW mysecview9 WITH (security_invoker=100) -- Error\n+ AS SELECT * FROM tbl1 WHERE a = 100;\n+ERROR: invalid value for boolean option \"security_invoker\": 100\n+CREATE VIEW mysecview10 WITH (invalid_option) -- Error\n\nI see no reasons to remove two of the existing tests.\n\n+++ b/src/test/regress/expected/rowsecurity.out\n@@ -8,9 +8,11 @@ DROP USER IF EXISTS regress_rls_alice;\n DROP USER IF EXISTS regress_rls_bob;\n DROP USER IF EXISTS regress_rls_carol;\n DROP USER IF EXISTS regress_rls_dave;\n+DROP USER IF EXISTS regress_rls_grace;\n\nBut the name has to start with \"e\"!\n\n\nI also see no reason to split a small patch like this into three parts.\n\nIn the attached, I dealt with 
the above and went over the comments.\nHow do you like it?\n\nYours,\nLaurenz Albe\n>", "msg_date": "Fri, 04 Feb 2022 17:09:44 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "Christoph Heiss wrote:\n> As part of a customer project we are looking to implement an reloption for views which when set, runs the subquery as invoked by the user rather than the view owner, as is currently the case.\n> The rewrite rule's table references are then checked as if the user were referencing the table(s) directly.\n> \n> This feature is similar to so-called 'SECURITY INVOKER' views in other DBMS. \n\nThis is a feature I have long been looking for. I tested the patch (v5) \nand found two cases that I feel need to be either fixed or documented \nexplicitly.\n\n\nCase 1 - Schema privileges:\n\ncreate schema a;\ncreate table a.t();\n\ncreate schema b;\ncreate view b.v with (security_invoker=true) as table a.t;\n\ncreate role alice;\ngrant usage on schema b to alice; -- missing schema a\ngrant select on table a.t, b.v to alice;\n\nset role alice;\ntable a.t; -- ERROR: permission denied for schema a (good)\ntable b.v; -- no error (good or bad?)\n\nUser alice does not have USAGE privileges on schema a, but only on table \na.t. A SELECT directly on the table fails as expected, but a SELECT on \nthe view succeeds. I assume the schema access is checked when the query \nis parsed - and at that stage, the user is still the view owner?\nThe docs mention explicitly that *all* objects are accessed with invoker \nprivileges, which is not the case.\n\nPersonally I actually like this. 
It allows to keep a view-based api in a \nseparate schema, while:\n- preserving full RLS capabilities and\n- forcing the user to go through the api, because a direct access to the \ndata schema is not possible.\n\nHowever, since this behavior was likely unintended until now, it raises \nthe question whether there are any other privilege checks that are not \ntaking the invoking user into account properly?\n\n\nCase 2 - Chained views:\n\ncreate schema a;\ncreate table a.t();\n\ncreate role bob;\ngrant create on database postgres to bob;\ngrant usage on schema a to bob;\nset role bob;\ncreate schema b;\ncreate view b.v1 with (security_invoker=true) as table a.t;\ncreate view b.v2 with (security_invoker=false) as table b.v1;\n\nreset role;\ncreate role alice;\ngrant usage on schema a, b to alice;\ngrant select on table a.t to bob;\ngrant select on table b.v2 to alice;\n\nset role alice;\ntable b.v2; -- ERROR: permission denied for table t (bad)\n\nWhen alice runs the SELECT on b.v2, the query on b.v1 is made with bob \nprivileges as the view owner of b.v2. This is verified, because alice \ndoes not have privileges to access b.v1, but no such error is thrown.\n\nb.v1 will then access a.t - and my first assumption was, that in this \ncase a.t should be accessed by bob, still as the view owner of b.v2. \nClearly, this is not the case as the permission denied error shows.\n\nThis is not actually a problem with this patch, I think, but just \nhighlighting a quirk in the current implementation of views \n(security_invoker=false) in general: While the query will be run with \nthe view owner, the CURRENT_USER is still the invoker, even \"after\" the \nview. In other words, the current implementation is *not* the same as \n\"security definer\". It's somewhere between \"security definer\" and \n\"security invoker\" - a strange mix really.\n\nAfaik this mix is not documented explicitly so far. 
But the \nsecurity_invoker reloption exposes it in a much less expected way, so I \nonly see two options really:\na) make the current implementation of security_invoker=false a true \n\"security definer\", i.e. change the CURRENT_USER \"after\" the view for good.\nb) document the \"security infiner/devoker\" default behavior as a feature.\n\nI really like a), as this would make a clear cut between security \ndefiner and security invoker views - but this would be a big breaking \nchange, which I don't think is acceptable.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Fri, 4 Feb 2022 22:28:51 +0100", "msg_from": "walther@technowledgy.de", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "On Fri, 2022-02-04 at 22:28 +0100, walther@technowledgy.de wrote:\n> This is a feature I have long been looking for. I tested the patch (v5) \n> and found two cases that I feel need to be either fixed or documented \n> explicitly.\n\nThanks for testing and weighing in!\n\n> Case 1 - Schema privileges:\n> \n> create schema a;\n> create table a.t();\n> \n> create schema b;\n> create view b.v with (security_invoker=true) as table a.t;\n> \n> create role alice;\n> grant usage on schema b to alice; -- missing schema a\n> grant select on table a.t, b.v to alice;\n> \n> set role alice;\n> table a.t; -- ERROR: permission denied for schema a (good)\n> table b.v; -- no error (good or bad?)\n> \n> User alice does not have USAGE privileges on schema a, but only on table \n> a.t. A SELECT directly on the table fails as expected, but a SELECT on \n> the view succeeds. I assume the schema access is checked when the query \n> is parsed - and at that stage, the user is still the view owner?\n> The docs mention explicitly that *all* objects are accessed with invoker \n> privileges, which is not the case.\n> \n> Personally I actually like this. 
It allows to keep a view-based api in a \n> separate schema, while:\n> - preserving full RLS capabilities and\n> - forcing the user to go through the api, because a direct access to the \n> data schema is not possible.\n> \n> However, since this behavior was likely unintended until now, it raises \n> the question whether there are any other privilege checks that are not \n> taking the invoking user into account properly?\n\nThis behavior is not new:\n\n CREATE SCHEMA viewtest;\n\n CREATE ROLE duff LOGIN;\n CREATE ROLE jock LOGIN;\n\n CREATE TABLE viewtest.tab (id integer);\n GRANT SELECT ON viewtest.tab TO duff;\n\n CREATE VIEW v AS SELECT * FROM viewtest.tab;\n ALTER VIEW v OWNER TO duff;\n GRANT SELECT ON v TO jock;\n\n SET ROLE jock;\n\n SELECT * FROM v;\n id \n ════\n (0 rows)\n\nSo even though the view owner \"duff\" has no permissions\non the schema \"viewtest\", we can still select from the table.\nPermissions on the schema containing the table are not\nchecked, only permissions on the table itself.\n\nI am not sure how to feel about this. It is not what I would have\nexpected, but changing it would be a compatibility break.\nShould this be considered a live bug in PostgreSQL?\n\nIf not, I don't know if it is the business of this patch to\nchange the behavior.\n\n> Case 2 - Chained views:\n> \n> create schema a;\n> create table a.t();\n> \n> create role bob;\n> grant create on database postgres to bob;\n> grant usage on schema a to bob;\n> set role bob;\n> create schema b;\n> create view b.v1 with (security_invoker=true) as table a.t;\n> create view b.v2 with (security_invoker=false) as table b.v1;\n> \n> reset role;\n> create role alice;\n> grant usage on schema a, b to alice;\n> grant select on table a.t to bob;\n> grant select on table b.v2 to alice;\n> \n> set role alice;\n> table b.v2; -- ERROR: permission denied for table t (bad)\n> \n> When alice runs the SELECT on b.v2, the query on b.v1 is made with bob \n> privileges as the view owner of b.v2. 
This is verified, because alice \n> does not have privileges to access b.v1, but no such error is thrown.\n> \n> b.v1 will then access a.t - and my first assumption was, that in this \n> case a.t should be accessed by bob, still as the view owner of b.v2. \n> Clearly, this is not the case as the permission denied error shows.\n> \n> This is not actually a problem with this patch, I think, but just \n> highlighting a quirk in the current implementation of views \n> (security_invoker=false) in general: While the query will be run with \n> the view owner, the CURRENT_USER is still the invoker, even \"after\" the \n> view. In other words, the current implementation is *not* the same as \n> \"security definer\". It's somewhere between \"security definer\" and \n> \"security invoker\" - a strange mix really.\n\nRight. Even though permissions on \"v1\" are checked for user \"bob\",\npermissions on the table are checked for the current user, which remains\n\"alice\".\n\nI agree that the name \"security_invoker\" is suggestive of SECURITY INVOKER\nin CREATE FUNCTION, but the behavior is different.\nPerhaps the solution is as simple as choosing a different name that does\nnot prompt this association, for example \"permissions_invoker\".\n\n> Afaik this mix is not documented explicitly so far. But the \n> security_invoker reloption exposes it in a much less expected way, so I \n> only see two options really:\n> a) make the current implementation of security_invoker=false a true \n> \"security definer\", i.e. 
change the CURRENT_USER \"after\" the view for good.\n> b) document the \"security infiner/devoker\" default behavior as a feature.\n> \n> I really like a), as this would make a clear cut between security \n> definer and security invoker views - but this would be a big breaking \n> change, which I don't think is acceptable.\n\nI agree that changing the current behavior is not acceptable.\n\nI guess more documentation how this works would be a good idea.\nNot sure if this is the job of this patch, but since it exposes this\nin new ways, it might as well clarify how all this works.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 09 Feb 2022 17:06:08 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "Laurenz Albe:\n> So even though the view owner \"duff\" has no permissions\n> on the schema \"viewtest\", we can still select from the table.\n> Permissions on the schema containing the table are not\n> checked, only permissions on the table itself.\n> \n> [...]\n> \n> If not, I don't know if it is the business of this patch to\n> change the behavior.\n\nAh, good find. In that case, I suggest to change the docs slightly to \nsay that the schema will not be checked.\n\nIn one place it's described as \"it will cause all access to the \nunderlying tables to be checked as ...\" which is fine, I think. But in \nanother place it's \"access to tables, functions and *other objects* \nreferenced in the view, ...\" which is misleading.\n\n> I agree that the name \"security_invoker\" is suggestive of SECURITY INVOKER\n> in CREATE FUNCTION, but the behavior is different.\n> Perhaps the solution is as simple as choosing a different name that does\n> not prompt this association, for example \"permissions_invoker\".\n\nYes, given that there is not much that can be done about the \nfunctionality anymore, a different name would be better. 
This would also \navoid the implicit \"if security_invoker=false, the view behaves like \nSECURITY DEFINER\" association, which is also clearly wrong. And this \nassumption is actually what made me think the chained views example was \nsomehow off.\n\nI am not convinced \"permissions_invoker\" is much better, though. The \ndifference between SECURITY INVOKER and SECURITY DEFINER is invoker vs \ndefiner... where, I think, we need something else to describe what we \ncurrently have and what the patch provides.\n\nMaybe we can look at it from the other perspective: Both ways of \noperating keep the CURRENT_USER the same, pretty much like what we \nunderstand \"security invoker\" should do. The difference, however, is the \ncurrent default in which the permissions are checked with the view \n*owner*. Let's treat this difference as the thing that can be set: \nsecurity_owner=true|false. Or run_as_owner=true|false.\n\nxxx_owner=true would be the default and xxx_owner=false could be set \nexplicitly to get the behavior we are looking for in this patch?\n\n\n> I guess more documentation how this works would be a good idea.\n> [...] but since it exposes this\n> in new ways, it might as well clarify how all this works.\n\n+1\n\nBest\n\nWolfgang\n\n\n", "msg_date": "Wed, 9 Feb 2022 17:40:01 +0100", "msg_from": "walther@technowledgy.de", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "Hi all,\n\nagain, many thanks for the reviews and testing!\n\nOn 2/4/22 17:09, Laurenz Albe wrote:\n> I also see no reason to split a small patch like this into three parts.\nI've split it into the three unrelated parts (code, docs, tests) to ease \nreview, but I happily carry it as one patch too.\n\n> In the attached, I dealt with the above and went over the comments.\n> How do you like it?\n\nThat is really nice, I used it to base v6 on.\n\nOn 2/9/22 17:40, walther@technowledgy.de wrote:\n> Ah, good find. 
In that case, I suggest to change the docs slightly to \n> say that the schema will not be checked.\n> \n> In one place it's described as \"it will cause all access to the \n> underlying tables to be checked as ...\" which is fine, I think. But in \n> another place it's \"access to tables, functions and *other objects* \n> referenced in the view, ...\" which is misleading\nI removed the reference to \"other objects\" for now in v6.\n\n>> I agree that the name \"security_invoker\" is suggestive of SECURITY \n>> INVOKER\n>> in CREATE FUNCTION, but the behavior is different.\n>> Perhaps the solution is as simple as choosing a different name that does\n>> not prompt this association, for example \"permissions_invoker\".\n> \n> Yes, given that there is not much that can be done about the \n> functionality anymore, a different name would be better. This would also \n> avoid the implicit \"if security_invoker=false, the view behaves like \n> SECURITY DEFINER\" association, which is also clearly wrong. And this \n> assumption is actually what made me think the chained views example was \n> somehow off.\n> \n> I am not convinced \"permissions_invoker\" is much better, though. The \n> difference between SECURITY INVOKER and SECURITY DEFINER is invoker vs \n> definer... where, I think, we need something else to describe what we \n> currently have and what the patch provides.\n> \n> Maybe we can look at it from the other perspective: Both ways of \n> operating keep the CURRENT_USER the same, pretty much like what we \n> understand \"security invoker\" should do. The difference, however, is the \n> current default in which the permissions are checked with the view \n> *owner*. Let's treat this difference as the thing that can be set: \n> security_owner=true|false. 
Or run_as_owner=true|false.\n> \n> xxx_owner=true would be the default and xxx_owner=false could be set \n> explicitly to get the behavior we are looking for in this patch?\n\nI'm not sure if an option which is on by default would be best, IMHO. I \nwould rather have an off-by-default option, so that you explicitly have \nto turn *on* that behavior rather than turning *off* the current.\n\n[ Pretty much bike-shedding here, but if the agreement comes to one of \n\"xxx_owner\" I won't mind it either. ]\n\nMy best suggestion is maybe something like run_as_invoker=t|f, but that \nwould probably raise the same \"invoker vs definer\" association.\n\nI left it for now as-is.\n\n>> I guess more documentation how this works would be a good idea.\n>> [...] but since it exposes this\n>> in new ways, it might as well clarify how all this works.\n\nI tried to clarify this situation in the documentation in a concise \nmanner; I'd appreciate further feedback on that.\n\nThanks,\nChristoph Heiss", "msg_date": "Mon, 14 Feb 2022 18:00:11 +0100", "msg_from": "Christoph Heiss <christoph.heiss@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "Laurenz Albe:\n> So even though the view owner \"duff\" has no permissions\n> on the schema \"viewtest\", we can still select from the table.\n> Permissions on the schema containing the table are not\n> checked, only permissions on the table itself.\n> \n> I am not sure how to feel about this. It is not what I would have\n> expected, but changing it would be a compatibility break.\n> Should this be considered a live bug in PostgreSQL?\n\nI now found the docs to say:\n\n\nUSAGE:\nFor schemas, allows access to objects contained in the schema (assuming \nthat the objects' own privilege requirements are also met). Essentially \nthis allows the grantee to “look up” objects within the schema. 
Without \nthis permission, it is still possible to see the object names, e.g., by \nquerying system catalogs. Also, after revoking this permission, existing \nsessions might have statements that have previously performed this \nlookup, so this is not a completely secure way to prevent object access.\n\n\nSo, this seems to be perfectly fine.\n\nBest\n\nWolfgang\n\n\n", "msg_date": "Tue, 15 Feb 2022 09:24:28 +0100", "msg_from": "walther@technowledgy.de", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "Christoph Heiss:\n>>> xxx_owner=true would be the default and xxx_owner=false could be set \n>>> explicitly to get the behavior we are looking for in this patch?\n>> \n>> I'm not sure if an option which is on by default would be best, IMHO. I \n>> would rather have an off-by-default option, so that you explicitly have \n>> to turn *on* that behavior rather than turning *off* the current.\n\nJust out of curiosity I asked myself whether there were any other \nboolean options that default to true in postgres - and there are plenty. \n./configure options, client connection settings, server config options, \netc - but also some SQL statements:\n- CREATE USER defaults to LOGIN\n- CREATE ROLE defaults to INHERIT\n- CREATE COLLATION defaults to DETERMINISTIC=true\n\nThere are even reloptions that do, e.g. vacuum_truncate.\n\n\n> My best suggestions is maybe something like run_as_invoker=t|f, but that \n> would probably raise the same \"invoker vs definer\" association.\n\nIt is slightly better, I agree. But, yes, that same association is \nraised easily. 
The more I think about it, the more it becomes clear that \nreally the current default behavior of \"running the query as the view \nowner\" is the special thing here, not the behavior you are introducing.\n\nIf we were to start from scratch, it would be pretty obvious - to me - \nthat run_as_owner=false would be the default, and the run_as_owner=true \nwould need to be turned on explicitly. I'm thinking about \"run_as_owner\" \nas the better design and \"defaults to true\" as a backwards compatibility \nthing.\n\nBut yeah, it would be good to hear other opinions on that, too.\n\nBest\n\nWolfgang\n\n\n", "msg_date": "Tue, 15 Feb 2022 09:37:54 +0100", "msg_from": "walther@technowledgy.de", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "Hi,\n\nOn 2/15/22 09:37, walther@technowledgy.de wrote:\n> Christoph Heiss:\n>>> xxx_owner=true would be the default and xxx_owner=false could be set \n>>> explicitly to get the behavior we are looking for in this patch?\n>>\n>> I'm not sure if an option which is on by default would be best, IMHO. \n>> I would rather have an off-by-default option, so that you explicitly \n>> have to turn *on* that behavior rather than turning *off* the current.\n> \n> Just out of curiosity I asked myself whether there were any other \n> boolean options that default to true in postgres - and there are plenty. \n> ./configure options, client connection settings, server config options, \n> etc - but also some SQL statements:\n> - CREATE USER defaults to LOGIN\n> - CREATE ROLE defaults to INHERIT\n> - CREATE COLLATION defaults to DETERMINISTIC=true\n> \n> There's even reloptions, that do, e.g. vacuum_truncate.\n\nKnowing that I happily drop my objection about that. :^)\n\n> [..] 
[..] The more I think about it, the more it becomes clear that \n> really the current default behavior of \"running the query as the view \n> owner\" is the special thing here, not the behavior you are introducing.\n> \n> If we were to start from scratch, it would be pretty obvious - to me - \n> that run_as_owner=false would be the default, and the run_as_owner=true \n> would need to be turned on explicitly. I'm thinking about \"run_as_owner\" \n> as the better design and \"defaults to true\" as a backwards compatibility \n> thing.\n\nRight, if we treat that as a kind of \"backwards-compatible\" feature, \nhaving a reloption that is on by default makes sense.\n\nI converted the option to run_as_owner=true|false in the attached v7.\nIt now definitely seems like the right way to move forward and get \nmore feedback.\n\nThanks,\nChristoph Heiss", "msg_date": "Tue, 15 Feb 2022 13:02:29 +0100", "msg_from": "Christoph Heiss <christoph.heiss@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "On Tue, 2022-02-15 at 13:02 +0100, Christoph Heiss wrote:\n\n> > > \n> I converted the option to run_as_owner=true|false in the attached v7.\n> It now definitely seems like the right way to move forward and get \n> more feedback.\n\nI think we are straying from the target.\n\n\"run_as_owner\" seems wrong to me, because it is all about permission\nchecking and *not* about running. As we have established, the query\nis always executed by the caller.\n\nSo my preferred bikeshed colors would be \"permissions_owner\" or\n\"permissions_caller\".\n\nAbout the documentation:\n\n--- a/doc/src/sgml/ref/alter_view.sgml\n+++ b/doc/src/sgml/ref/alter_view.sgml\n@@ -156,11 +156,21 @@ ALTER VIEW [ IF EXISTS ] <replaceable class=\"parameter\">name</replaceable> RESET\n <listitem>\n <para>\n Changes the security-barrier property of the view. 
The value must\n- be Boolean value, such as <literal>true</literal>\n+ be a Boolean value, such as <literal>true</literal>\n or <literal>false</literal>.\n </para>\n </listitem>\n </varlistentry>\n+ <varlistentry>\n+ <term><literal>run_as_owner</literal> (<type>boolean</type>)</term>\n+ <listitem>\n+ <para>\n+ Changes the user as which the subquery is run. Default is\n+ <literal>true</literal>. The value must be a Boolean value, such as\n+ <literal>true</literal> or <literal>false</literal>.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n\nCorrect would be\n\nIf set to <literal>true</literal> (which is the default value), permissions\non the underlying relations are checked as view owner, otherwise as the user\nexecuting the query.\n\n(I used \"relation\" to express that it doesn't hold for functions.)\n\n--- a/doc/src/sgml/ref/create_view.sgml\n+++ b/doc/src/sgml/ref/create_view.sgml\n@@ -265,13 +278,39 @@ CREATE VIEW vista AS SELECT text 'Hello World' AS hello;\n </para>\n \n <para>\n- Access to tables referenced in the view is determined by permissions of\n- the view owner. In some cases, this can be used to provide secure but\n- restricted access to the underlying tables. However, not all views are\n- secure against tampering; see <xref linkend=\"rules-privileges\"/> for\n- details. Functions called in the view are treated the same as if they had\n- been called directly from the query using the view. Therefore the user of\n- a view must have permissions to call all functions used by the view.\n+ By default, access to tables and functions referenced in the view is\n+ determined by permissions of the view owner.\n\nNo, access to the functions is checked for the caller.\n\n+ [...] Therefore the user of a view must have permissions\n\nComma after \"therefore\".\n\n+ to call all functions used by the view. 
This also means that functions\n+ are executed as the invoking user, not the view owner.\n+ </para>\n+\n+ <para>\n+ However, when using chained views, the <literal>CURRENT_USER</literal> user\n+ will always stay the invoking user,\n\n\n\"However\" would introduce something that is different from what came before,\nwhich this doesn't seem to be.\n\nPerhaps \"In particular\" or \"moreover\".\n\n+ regardless of whether the query is run\n+ as the view owner (the default) or the invoking user (when\n+ <literal>run_as_owner</literal> is set to <literal>false</literal>)\n+ and the depth of the current invocation.\n+ </para>\n\nThe query is *always* run as the invoking user. Better:\n\nregardless of whether relation permissions are checked as the view owner or ...\n\n+ <para>\n+ Be aware that <literal>USAGE</literal> privileges on schemas are not checked\n+ when referencing the underlying base relations, even if they are part of a\n+ different schema.\n </para>\n\n\"referencing\" is a bit unclear.\nPerhaps \"when checking permissions on the underlying base relations\".\n\nOtherwise, this looks good!\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 15 Feb 2022 14:55:49 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "Laurenz Albe:\n>> I converted the option to run_as_owner=true|false in the attached v7.\n>> It now definitely seems like the right way to move forward and getting\n>> more feedback.\n> I think we are straying from the target.\n> \n> \"run_as_owner\" seems wrong to me, because it is all about permission\n> checking and*not* about running. As we have established, the query\n> is always executed by the caller.\n> \n> So my preferred bikeshed colors would be \"permissions_owner\" or\n> \"permissions_caller\".\n\nMy main point was the \"xxx_owner = true by default\" thing. Whether xxx \nis \"permissions\" or \"run_as\" doesn't change that. 
permissions_caller, \nhowever, would be a step backwards.\n\nI can see how permissions_owner is better than run_as_owner. The code \nuses checkAsUser, so check_as_owner would be an option, too. Although \nthat could easily be associated with WITH CHECK OPTION. Thinking about \nthat, the difference between LOCAL and CASCADED for CHECK OPTION pretty \nmuch sums up one of the confusing bits about the whole thing, too.\n\nMaybe \"local_permissions_owner = true | false\"? That would make it \ncrystal-clear, that this is only about the very first permissions check \nand not about any checks later in a chain of multiple views.\n\n\"local_permissions = owner | caller\" could also work - as long as we're \nnot using any of definer or invoker.\n\nBest\n\nWolfgang\n\n\n", "msg_date": "Tue, 15 Feb 2022 16:07:56 +0100", "msg_from": "walther@technowledgy.de", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "On Tue, 2022-02-15 at 16:07 +0100, walther@technowledgy.de wrote:\n> Laurenz Albe:\n> > > I converted the option to run_as_owner=true|false in the attached v7.\n> > > It now definitely seems like the right way to move forward and getting\n> > > more feedback.\n> > I think we are straying from the target.\n> > \n> > \"run_as_owner\" seems wrong to me, because it is all about permission\n> > checking and*not*  about running.  As we have established, the query\n> > is always executed by the caller.\n> > \n> > So my preferred bikeshed colors would be \"permissions_owner\" or\n> > \"permissions_caller\".\n> \n> My main point was the \"xxx_owner = true by default\" thing. Whether xxx \n> is \"permissions\" or \"run_as\" doesn't change that. permissions_caller, \n> however, would be a step backwards.\n> \n> I can see how permissions_owner is better than run_as_owner. The code \n> uses checkAsUser, so check_as_owner would be an option, too. Although \n> that could easily be associated with WITH CHECK OPTION. 
Thinking about \n> that, the difference between LOCAL and CASCADED for CHECK OPTION pretty \n> much sums up one of the confusing bits about the whole thing, too.\n> \n> Maybe \"local_permissions_owner = true | false\"? That would make it \n> crystal-clear, that this is only about the very first permissions check \n> and not about any checks later in a chain of multiple views.\n> \n> \"local_permissions = owner | caller\" could also work - as long as we're \n> not using any of definer or invoker.\n\nI don't think that \"local\" will make this clearer.\nI'd be happy with \"check_as_owner\", except it is unclear *what* is checked.\n\"check_permissions_as_owner\" is ok with me, but a bit long.\n\nHow about \"check_permissions_owner\"?\n\nYours,\nLaurenz\n\n\n\n", "msg_date": "Tue, 15 Feb 2022 16:25:40 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "Laurenz Albe:\n> I'd be happy with \"check_as_owner\", except it is unclear *what* is checked.\n\nYeah, that could be associated with WITH CHECK OPTION, too, as in \"do \nthe CHECK OPTION stuff as the owner\".\n\n> \"check_permissions_as_owner\" is ok with me, but a bit long.\n\ncheck_permissions_as_owner is exactly what happens. The additional \"as\" \nshouldn't be a problem in length - but is much better to read. I \nwouldn't associate that with CHECK OPTION either. +1\n\nBest\n\nWolfgang\n\n\n", "msg_date": "Tue, 15 Feb 2022 16:32:49 +0100", "msg_from": "walther@technowledgy.de", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "On Tue, 2022-02-15 at 16:32 +0100, walther@technowledgy.de wrote:\n> > \"check_permissions_as_owner\" is ok with me, but a bit long.\n> \n> check_permissions_as_owner is exactly what happens. The additional \"as\" \n> shouldn't be a problem in length - but is much better to read. 
I \n> wouldn't associate that with CHECK OPTION either. +1\n\nHere is a new version, with improved documentation and the option renamed\nto \"check_permissions_owner\". I just prefer the shorter form.\n\nYours,\nLaurenz Albe", "msg_date": "Fri, 18 Feb 2022 15:57:06 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "On Fri, 18 Feb 2022 at 14:57, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> Here is a new version, with improved documentation and the option renamed\n> to \"check_permissions_owner\". I just prefer the shorter form.\n>\n\nRe-reading this thread, I think I preferred the name\n\"security_invoker\". The main objection seemed to come from the\npotential confusion with SECURITY INVOKER/DEFINER functions, but I\nthink that's really a different thing. As long as the documentation\nfor the default behaviour is clear (which I think it was), then it\nshould be easy to explain how a security invoker view behaves\ndifferently. Also, there's value in using the same terminology as\nother databases, because many users will already be familiar with the\nfeature from those databases.\n\nSome other review comments:\n\n1). This new comment:\n\n+ <para>\n+ Be aware that <literal>USAGE</literal> privileges on schemas containing\n+ the underlying base relations are <emphasis>not</emphasis> checked.\n+ </para>\n\nis not entirely accurate. It's more accurate to say that a user\ncreating or replacing a view must have CREATE privileges on the schema\ncontaining the view and USAGE privileges on any schemas referred to in\nthe view query, whereas a user using the view only needs USAGE\nprivileges on the schema containing the view.\n\n(Note that, for the view creator, USAGE is required on any schema\nreferred to in the query -- e.g., schemas containing functions as well\nas base relations.)\n\n2). 
The patch is adding a new field to RangeTblEntry which seems to be\nunnecessary -- it's set, and copied around, but never read, so it\nshould just be removed.\n\n3). Looking at this change:\n\n- setRuleCheckAsUser((Node *) rule->actions, relation->rd_rel->relowner);\n- setRuleCheckAsUser(rule->qual, relation->rd_rel->relowner);\n+ if (!(relation->rd_rel->relkind == RELKIND_VIEW\n+ && !RelationSubqueryCheckPermsOwner(relation)))\n+ {\n+ setRuleCheckAsUser((Node *) rule->actions,\nrelation->rd_rel->relowner);\n+ setRuleCheckAsUser(rule->qual, relation->rd_rel->relowner);\n+ }\n\nI think it should call setRuleCheckAsUser() in all cases. It might be\ntrue that the rule fetched has checkAsUser set to InvalidOid\nthroughout its action and quals, but it seems unwise to rely on that\n-- better to code defensively and explicitly set it in all cases.\n\n4). In the same code block, I think the new behaviour should be\napplied to SELECT rules only. The view may have other non-SELECT rules\n(just as a table may have non-SELECT rules), created using CREATE\nRULE, but their actions are independent of the view definition.\nCurrently their permissions are checked as the view/table owner, and\nif anyone wanted to change that, it should be an option on the rule,\nnot the view (just as triggers can be made security definer or\ninvoker, depending on how the trigger function is defined).\n\n(Note: I'm not suggesting that anyone actually spend any time adding\nsuch an option to rules. Given all the pitfalls associated with rules,\nI think their use should be discouraged, and no development effort\nshould be expended enhancing them.)\n\n5). In the same function, the block of code that fetches rules and\ntriggers has been moved. I think it would be worth adding a comment to\nexplain why it's now important to extract the reloptions *before*\nfetching the relation's rules and triggers.\n\n6). 
The second set of tests added to rowsecurity.sql seem to have\nnothing to do with RLS, and probably belong in updatable_views.sql,\nand I think it would be worth adding a few more tests for things like\nviews on top of views.\n\nRegards,\nDean\n\n\n", "msg_date": "Fri, 25 Feb 2022 18:22:21 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "Thanks for reviewing!\n\nOn 2/25/22 19:22, Dean Rasheed wrote:\n> Re-reading this thread, I think I preferred the name\n> \"security_invoker\". The main objection seemed to come from the\n> potential confusion with SECURITY INVOKER/DEFINER functions, but I\n> think that's really a different thing. As long as the documentation\n> for the default behaviour is clear (which I think it was), then it\n> should be easy to explain how a security invoker view behaves\n> differently. Also, there's value in using the same terminology as\n> other databases, because many users will already be familiar with the\n> feature from those databases.\n\nThat is also the main reason I preferred naming it \"security_invoker\" - \nit is consistent with other databases and eases transition from such \nsystems.\n\nI kept \"check_permissions_owner\" for now. Constantly changing it around \nwith each iteration doesn't really bring any value IMHO, I'd rather have \na final consensus on how to name the option and *then* change it for good.\n\n> \n> Some other review comments:\n> \n> 1). This new comment:\n> \n> + <para>\n> + Be aware that <literal>USAGE</literal> privileges on schemas containing\n> + the underlying base relations are <emphasis>not</emphasis> checked.\n> + </para>\n> \n> is not entirely accurate. 
It's more accurate to say that a user\n> creating or replacing a view must have CREATE privileges on the schema\n> containing the view and USAGE privileges on any schemas referred to in\n> the view query, whereas a user using the view only needs USAGE\n> privileges on the schema containing the view.\n> \n> (Note that, for the view creator, USAGE is required on any schema\n> referred to in the query -- e.g., schemas containing functions as well\n> as base relations.)\n\nImproved in the attached v9.\n\n> \n> 2). The patch is adding a new field to RangeTblEntry which seems to be\n> unnecessary -- it's set, and copied around, but never read, so it\n> should just be removed.\n\nI removed that field in v9 since it is indeed completely unused. I \ninitially added it to be consistent with the \"security_barrier\" \nimplementation and then somewhat forgot about it.\n\n> \n> 3). Looking at this change:\n> \n> [..]\n> \n> I think it should call setRuleCheckAsUser() in all cases. It might be\n> true that the rule fetched has checkAsUser set to InvalidOid\n> throughout its action and quals, but it seems unwise to rely on that\n> -- better to code defensively and explicitly set it in all cases.\n\nIt probably doesn't really matter, but I agree that coding defensively \nis always a good thing.\nChanged that in v9 to call setRuleCheckAsUser() either with ->relowner \nor InvalidOid.\n\n> \n> 4). In the same code block, I think the new behaviour should be\n> applied to SELECT rules only. 
The view may have other non-SELECT rules\n> (just as a table may have non-SELECT rules), created using CREATE\n> RULE, but their actions are independent of the view definition.\n> Currently their permissions are checked as the view/table owner, and\n> if anyone wanted to change that, it should be an option on the rule,\n> not the view (just as triggers can be made security definer or\n> invoker, depending on how the trigger function is defined).\n> \n\nGood catch, I added an additional check for rule->event and a test for \nthat in v9.\n[ I also had to add a missing DROP statement to some previous test, just \na heads up. ]\n\nIt makes sense to mimic the behavior of triggers; otherwise, \nuser-created rules might behave differently for tables and \nviews, depending on the view definition.\n[ But I'm not _that_ familiar with CREATE RULE, FWIW. ]\n\n> \n> 5). In the same function, the block of code that fetches rules and\n> triggers has been moved. I think it would be worth adding a comment to\n> explain why it's now important to extract the reloptions *before*\n> fetching the relation's rules and triggers.\n\nAdded a small comment explaining that in v9.\n\n> \n> 6). The second set of tests added to rowsecurity.sql seem to have\n> nothing to do with RLS, and probably belong in updatable_views.sql,\n> and I think it would be worth adding a few more tests for things like\n> views on top of views.\n\nSeems reasonable to move them into updatable_views.sql, done that for \nv9. 
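As a sketch of the chained-view case that came up earlier in the thread (all names are invented for illustration; spelled here with the security_invoker naming also discussed in this thread):

```sql
-- Sketch only; names are illustrative.
CREATE VIEW inner_v AS SELECT x FROM t;      -- default: checks t as inner_v's owner
CREATE VIEW outer_v WITH (security_invoker = true)
    AS SELECT x FROM inner_v;

-- Querying outer_v as bob: bob's own privileges are checked on inner_v
-- (because of the option on outer_v), but inner_v still checks t as its
-- own owner -- the option applies per view and is not inherited downward.
```
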
Further I added two (simple) tests for chained views as you \nmentioned, hope they reflect what you had in mind.\n\nThanks,\nChristoph", "msg_date": "Tue, 1 Mar 2022 17:40:45 +0100", "msg_from": "Christoph Heiss <christoph.heiss@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "On Tue, 1 Mar 2022 at 16:40, Christoph Heiss\n<christoph.heiss@cybertec.at> wrote:\n>\n> That is also the main reason I preferred naming it \"security_invoker\" -\n> it is consistent with other databases and eases transition from such\n> systems.\n>\n> I kept \"check_permissions_owner\" for now. Constantly changing it around\n> with each iteration doesn't really bring any value IMHO, I'd rather have\n> a final consensus on how to name the option and *then* change it for good.\n>\n\nYes indeed, it's annoying to keep changing the name between patch\nversions, so let's try to get a consensus now.\n\nFor my part, I find myself more and more convinced that\n\"security_invoker\" is the right name, because it matches the\nterminology used for functions, and in other database systems. I think\nthe parallels between security invoker functions and security invoker\nviews are quite strong.\n\nThere are a couple of additional considerations that lend weight to\nthat choice of name, though not uniquely to it:\n\n1). There is a slight advantage to having an option that defaults to\nfalse/off, like the existing \"security_barrier\" option -- it allows a\nshorthand to turn the option on, because the system automatically\nturns \"WITH (security_barrier)\" into \"WITH (security_barrier=true)\".\n\n2). 
Grammatically, a name like this works better, because it serves\nboth as the name of the boolean option, and as an adjective that can\nbe used to describe and name the feature -- as in \"security barrier\nviews are cool\" -- making it easier to talk about the feature.\n\n\"check_permissions_owner=false\" doesn't work as well in either regard,\nand just feels much more clumsy.\n\nWhen we come to write the release notes for this feature, saying that\nthis version of PG now supports security invoker views is going to\nmean a lot more to people who already use that feature in other\ndatabases.\n\nWhat are other people's opinions?\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 2 Mar 2022 10:10:57 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "Dean Rasheed:\n>> That is also the main reason I preferred naming it \"security_invoker\" -\n>> it is consistent with other databases and eases transition from such\n>> systems.\n> [...]\n>\n> For my part, I find myself more and more convinced that\n> \"security_invoker\" is the right name, because it matches the\n> terminology used for functions, and in other database systems. I think\n> the parallels between security invoker functions and security invoker\n> views are quite strong.\n>\n> [...]\n>\n> When we come to write the release notes for this feature, saying that\n> this version of PG now supports security invoker views is going to\n> mean a lot more to people who already use that feature in other\n> databases.\n>\n> What are other people's opinions?\n\nAll those points in favor of security_invoker are very good indeed. The \nmain objection was not the term invoker, though, but the implicit \nassociation it creates as in \"security_invoker=false behaves like \nsecurity definer\". 
But this is clearly wrong; the \"security definer\" \nsemantics as used for functions or in other databases just don't apply \nas the default in PG.\n\nI think renaming the reloption was a shortcut to avoid that association, \nwhile the best way to deal with that would be explicit documentation. \nMeanwhile, the patch has added a mention about CURRENT_USER, so that's a \nfirst step. Maybe an explicit mention that security_invoker=false is \nNOT the same as \"security definer\" and explaining why would already be \nenough?\n\nBest\n\nWolfgang\n\n\n\n", "msg_date": "Wed, 2 Mar 2022 11:46:40 +0100", "msg_from": "Wolfgang Walther <walther@technowledgy.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "On Wed, 2022-03-02 at 10:10 +0000, Dean Rasheed wrote:\n> > I kept \"check_permissions_owner\" for now. Constantly changing it around\n> > with each iteration doesn't really bring any value IMHO, I'd rather have\n> > a final consensus on how to name the option and *then* change it for good.\n> \n> Yes indeed, it's annoying to keep changing the name between patch\n> versions, so let's try to get a consensus now.\n> \n> For my part, I find myself more and more convinced that\n> \"security_invoker\" is the right name [...]\n> \n> What are other people's opinions?\n\nI am fine with \"security_invoker\". 
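A short sketch of that point -- the default mode is not \"security definer\" in the function sense, because CURRENT_USER never switches (names are illustrative):

```sql
-- Sketch only. Inside a view, current_user is always the calling user,
-- even in the default mode where relation privileges are checked as the
-- view owner. A SECURITY DEFINER function, by contrast, switches the
-- current user to the function owner while it runs.
CREATE VIEW whoami AS SELECT current_user AS usr;   -- owned by alice
GRANT SELECT ON whoami TO bob;

-- Executed as bob: returns 'bob', not 'alice'.
SELECT usr FROM whoami;
```
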
If there are other databases that use the\nsame term for the same thing, that is a strong argument.\n\nI also agree that having \"off\" for the default setting is nicer.\n\nMy main worry is that other people misunderstand it in the same way that\nWalther did, namely that this behaves just like security invoker functions.\nBut if the behavior is well documented, I think that is ok.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 02 Mar 2022 20:07:12 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "On 3/2/22 11:10, Dean Rasheed wrote:\n> For my part, I find myself more and more convinced that\n> \"security_invoker\" is the right name, because it matches the\n> terminology used for functions, and in other database systems. I think\n> the parallels between security invoker functions and security invoker\n> views are quite strong.\n> \n> [..]\n> \n> What are other people's opinions?\n> \n\nSince there don't seem to be any more objections to \"security_invoker\" I \nattached v10 renaming it again.\n\nI've tried to better clarify the whole invoker vs. 
definer thing in the \n> CREATE VIEW documentation by explicitly mentioning that \n> \"security_invoker=false\" is _not_ the same as \"security definer\", based \n> on the earlier discussions.\n> \n> This should hopefully avoid any implicit associations.\n\nI have only some minor comments:\n\n> --- a/doc/src/sgml/ref/create_view.sgml\n> +++ b/doc/src/sgml/ref/create_view.sgml\n> @@ -387,10 +430,17 @@ CREATE VIEW vista AS SELECT text 'Hello World' AS hello;\n> <para>\n> Note that the user performing the insert, update or delete on the view\n> must have the corresponding insert, update or delete privilege on the\n> - view. In addition the view's owner must have the relevant privileges on\n> - the underlying base relations, but the user performing the update does\n> - not need any permissions on the underlying base relations (see\n> - <xref linkend=\"rules-privileges\"/>).\n> + view.\n> + </para>\n> +\n> + <para>\n> + Additionally, by default the view's owner must have the relevant privileges\n> + on the underlying base relations, but the user performing the update does\n> + not need any permissions on the underlying base relations. (see\n> + <xref linkend=\"rules-privileges\"/>) If the view has the\n> + <literal>security_invoker</literal> property is set to\n> + <literal>true</literal>, the invoking user will need to have the relevant\n> + privileges rather than the view owner.\n> </para>\n> </refsect2>\n> </refsect1>\n\nThis paragraph contains a couple of grammatical errors.\nHow about\n\n <para>\n Note that the user performing the insert, update or delete on the view\n must have the corresponding insert, update or delete privilege on the\n view. 
Unless <literal>security_invoker</literal> is set to\n <literal>true</literal>, the view's owner must additionally have the\n relevant privileges on the underlying base relations, but the user\n performing the update does not need any permissions on the underlying\n base relations (see <xref linkend=\"rules-privileges\"/>).\n If <literal>security_invoker</literal> is set to <literal>true</literal>,\n it is the invoking user rather than the view owner that must have the\n relevant privileges on the underlying base relations.\n </para>\n\nAlso, this:\n\n> --- a/src/backend/utils/cache/relcache.c\n> +++ b/src/backend/utils/cache/relcache.c\n> @@ -838,8 +846,18 @@ RelationBuildRuleLock(Relation relation)\n> * the rule tree during load is relatively cheap (compared to\n> * constructing it in the first place), so we do it here.\n> */\n> - setRuleCheckAsUser((Node *) rule->actions, relation->rd_rel->relowner);\n> - setRuleCheckAsUser(rule->qual, relation->rd_rel->relowner);\n> + if (rule->event == CMD_SELECT\n> + && relation->rd_rel->relkind == RELKIND_VIEW\n> + && RelationHasSecurityInvoker(relation))\n> + {\n> + setRuleCheckAsUser((Node *) rule->actions, InvalidOid);\n> + setRuleCheckAsUser(rule->qual, InvalidOid);\n> + }\n> + else\n> + {\n> + setRuleCheckAsUser((Node *) rule->actions, relation->rd_rel->relowner);\n> + setRuleCheckAsUser(rule->qual, relation->rd_rel->relowner);\n> + }\n\ncould be written like this (introducing a new variable):\n\n if (rule->event == CMD_SELECT\n && relation->rd_rel->relkind == RELKIND_VIEW\n && RelationHasSecurityInvoker(relation))\n user_for_check = InvalidOid;\n else\n user_for_check = relation->rd_rel->relowner;\n\n setRuleCheckAsUser((Node *) rule->actions, user_for_check);\n setRuleCheckAsUser(rule->qual, user_for_check);\n\nThis might be easier to read.\n\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 09 Mar 2022 16:06:59 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: 
[PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "On 3/9/22 16:06, Laurenz Albe wrote:\n> This paragraph contains a couple of grammatical errors.\n> How about\n> \n> <para>\n> Note that the user performing the insert, update or delete on the view\n> must have the corresponding insert, update or delete privilege on the\n> view. Unless <literal>security_invoker</literal> is set to\n> <literal>true</literal>, the view's owner must additionally have the\n> relevant privileges on the underlying base relations, but the user\n> performing the update does not need any permissions on the underlying\n> base relations (see <xref linkend=\"rules-privileges\"/>).\n> If <literal>security_invoker</literal> is set to <literal>true</literal>,\n> it is the invoking user rather than the view owner that must have the\n> relevant privileges on the underlying base relations.\n> </para>\n\nReplaced the two paragraphs with your suggestion, it is indeed easier to \nread.\n\n> \n> Also, this:\n> \n> [..]\n> \n> could be written like this (introducing a new variable):\n> \n> if (rule->event == CMD_SELECT\n> && relation->rd_rel->relkind == RELKIND_VIEW\n> && RelationHasSecurityInvoker(relation))\n> user_for_check = InvalidOid;\n> else\n> user_for_check = relation->rd_rel->relowner;\n> \n> setRuleCheckAsUser((Node *) rule->actions, user_for_check);\n> setRuleCheckAsUser(rule->qual, user_for_check);\n> \n> This might be easier to read.\n\nMakes sense, I've changed that. 
This also seems to be more in line with \nall the other code.\nWhile at it I also split the comment alongside it to match, hopefully \nthat makes sense.\n\nThanks,\nChristoph Heiss", "msg_date": "Mon, 14 Mar 2022 13:40:47 +0100", "msg_from": "Christoph Heiss <christoph.heiss@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "On Mon, 2022-03-14 at 13:40 +0100, Christoph Heiss wrote:\n> On 3/9/22 16:06, Laurenz Albe wrote:\n> > This paragraph contains a couple of grammatical errors.\n>\n> Replaced the two paragraphs with your suggestion, it is indeed easier to \n> read.\n> \n> > Also, this:\n> > could be written like this (introducing a new variable):\n> > \n> >    if (rule->event == CMD_SELECT\n> >        && relation->rd_rel->relkind == RELKIND_VIEW\n> >        && RelationHasSecurityInvoker(relation))\n> >        user_for_check = InvalidOid;\n> >    else\n> >        user_for_check = relation->rd_rel->relowner;\n> > \n> >    setRuleCheckAsUser((Node *) rule->actions, user_for_check);\n> >    setRuleCheckAsUser(rule->qual, user_for_check);\n> > \n> > This might be easier to read.\n> \n> Makes sense, I've changed that. This also seems to be more in line with \n> all the other code.\n> While at it I also split the comment alongside it to match, hopefully \n> that makes sense.\n\nThe patch is fine from my point of view.\n\nIt passes \"make check-world\".\n\nI'll mark it as \"ready for committer\".\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 14 Mar 2022 17:16:33 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "On Mon, 14 Mar 2022 at 16:16, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> The patch is fine from my point of view.\n>\n> It passes \"make check-world\".\n>\n> I'll mark it as \"ready for committer\".\n>\n\nCool, thanks. 
I think this will make a useful addition to PG15.\n\nI have been hacking on it a bit, and attached is an updated version.\nAside from some general copy editing, the most notable changes are:\n\nIn the updatable_views tests, I have moved the new tests to\nimmediately after the existing permission checking tests, which seems\nlike a more logical place to put them, and modified them to use the\nsame style as those existing tests. IMO, this test style makes the\ntask of writing tests simpler, since the expected output is a little\nmore obvious.\n\nSimilarly in the rowsecurity tests, I have moved the new tests to\nimmediately after the existing tests for RLS policies on tables\naccessed via views, and added a few new tests in the same style,\nincluding verifying permission checks on relations in subqueries in\nRLS policies, when the table is accessed via a view.\n\nI wasn't happy with the overall level of test coverage for this new\nfeature, so I have expanded on them quite a bit. This includes tests\nfor a bug in rewriteTargetView() -- it wasn't consistently handling\nthe case of an update involving an ordinary view on top of a security\ninvoker view.\n\nI have added explicit documentation for the fact that a security\ninvoker view always does permission checks as the current user, even\nif it is accessed from a non-security invoker view, since that was the\ncause of some discussion on this thread.\n\nI've also added some more detailed documentation describing how all\nthis affects RLS, since that's likely to be a common use case.\n\nI've done a fairly extensive doc search, and I *think* I've identified\nall the other places that needed updating.\n\nOne additional thing that had been missed was that the LOCK command\ncan be used to lock views, which includes locking all underlying base\nrelations, after checking permissions as the view owner. 
The\nlogical/consistent thing to do for security invoker views is to do the\npermission checks as the invoking user, so I've done that.\n\nBarring any other comments or objections, I'll push this in a couple\nof days or so, after a bit more proof-reading.\n\nRegards,\nDean", "msg_date": "Sat, 19 Mar 2022 01:10:02 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "On Sat, 2022-03-19 at 01:10 +0000, Dean Rasheed wrote:\n> I have been hacking on it a bit, and attached is an updated version.\n> Aside from some general copy editing, the most notable changes are:\n> [...]\n\nThanks for your diligent work on this, and the patch looks good to me.\nIt is good that you found the oversight in LOCK - I wasn't even\naware that views could be locked.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 21 Mar 2022 10:47:43 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "On Mon, 2022-03-21 at 18:09 +0800, Japin Li wrote:\n> After apply the patch, I found pg_checksums.c also has the similar code.\n> \n> In progress_report(), I'm not sure we can do this replace for this code.\n> \n>     snprintf(total_size_str, sizeof(total_size_str), INT64_FORMAT,\n>              total_size / (1024 * 1024));\n>     snprintf(current_size_str, sizeof(current_size_str), INT64_FORMAT,\n>              current_size / (1024 * 1024));\n> \n>     fprintf(stderr, _(\"%*s/%s MB (%d%%) computed\"),\n>             (int) strlen(current_size_str), current_size_str, total_size_str,\n>             percent);\n\nI think you replied to the wrong thread...\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 21 Mar 2022 13:40:05 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable 
RLS" }, { "msg_contents": "\nOn Mon, 21 Mar 2022 at 20:40, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> On Mon, 2022-03-21 at 18:09 +0800, Japin Li wrote:\n>> After apply the patch, I found pg_checksums.c also has the similar code.\n>>\n>> In progress_report(), I'm not sure we can do this replace for this code.\n>>\n>> snprintf(total_size_str, sizeof(total_size_str), INT64_FORMAT,\n>> total_size / (1024 * 1024));\n>> snprintf(current_size_str, sizeof(current_size_str), INT64_FORMAT,\n>> current_size / (1024 * 1024));\n>>\n>> fprintf(stderr, _(\"%*s/%s MB (%d%%) computed\"),\n>> (int) strlen(current_size_str), current_size_str, total_size_str,\n>> percent);\n>\n> I think you replied to the wrong thread...\n>\n\n\nI'm sorry! There is a problem with my email client and I didn't notice the\nsubject of the reply email.\n\nAgain, sorry for the noise!\n\n--\nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Mon, 21 Mar 2022 21:26:54 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" }, { "msg_contents": "On Mon, 21 Mar 2022 at 09:47, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> Thanks for your diligent work on this, and the patch looks good to me.\n\nThanks for looking again. Pushed.\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 22 Mar 2022 11:31:25 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add reloption for views to enable RLS" } ]
[ { "msg_contents": "Hi,\n\nwhile working on logical decoding of sequences, I ran into an issue with \nnextval() in a transaction that rolls back, described in [1]. But after \nthinking about it a bit more (and chatting with Petr Jelinek), I think \nthis issue affects physical sync replication too.\n\nImagine you have a primary <-> sync_replica cluster, and you do this:\n\n CREATE SEQUENCE s;\n\n -- shutdown the sync replica\n\n BEGIN;\n SELECT nextval('s') FROM generate_series(1,50);\n ROLLBACK;\n\n BEGIN;\n SELECT nextval('s');\n COMMIT;\n\nThe natural expectation would be the COMMIT gets stuck, waiting for the \nsync replica (which is not running), right? But it does not.\n\nThe problem is exactly the same as in [1] - the aborted transaction \ngenerated WAL, but RecordTransactionAbort() ignores that and does not \nupdate LogwrtResult.Write, with the reasoning that aborted transactions \ndo not matter. But sequences violate that, because we only write WAL \nonce every 32 increments, so the following nextval() gets \"committed\" \nwithout waiting for the replica (because it did not produce WAL).\n\nI'm not sure this is a clear data corruption bug, but it surely walks \nand quacks like one. My proposal is to fix this by tracking the lsn of \nthe last LSN for a sequence increment, and then check that LSN in \nRecordTransactionCommit() before calling XLogFlush().\n\n\nregards\n\n\n[1] \nhttps://www.postgresql.org/message-id/ae3cab67-c31e-b527-dd73-08f196999ad4%40enterprisedb.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 18 Dec 2021 02:53:49 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "sequences vs. 
synchronous replication" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> The problem is exactly the same as in [1] - the aborted transaction \n> generated WAL, but RecordTransactionAbort() ignores that and does not \n> update LogwrtResult.Write, with the reasoning that aborted transactions \n> do not matter. But sequences violate that, because we only write WAL \n> once every 32 increments, so the following nextval() gets \"committed\" \n> without waiting for the replica (because it did not produce WAL).\n\nUgh.\n\n> I'm not sure this is a clear data corruption bug, but it surely walks \n> and quacks like one. My proposal is to fix this by tracking the lsn of \n> the last LSN for a sequence increment, and then check that LSN in \n> RecordTransactionCommit() before calling XLogFlush().\n\n(1) Does that work if the aborted increment was in a different\nsession? I think it is okay but I'm tired enough to not be sure.\n\n(2) I'm starting to wonder if we should rethink the sequence logging\nmechanism altogether. It was cool when designed, but it seems\nreally problematic when you start thinking about replication\nbehaviors. Perhaps if wal_level > minimal, we don't do things\nthe same way?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Dec 2021 23:52:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "\n\nOn 12/18/21 05:52, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> The problem is exactly the same as in [1] - the aborted transaction\n>> generated WAL, but RecordTransactionAbort() ignores that and does not\n>> update LogwrtResult.Write, with the reasoning that aborted transactions\n>> do not matter. 
But sequences violate that, because we only write WAL\n>> once every 32 increments, so the following nextval() gets \"committed\"\n>> without waiting for the replica (because it did not produce WAL).\n> \n> Ugh.\n> \n>> I'm not sure this is a clear data corruption bug, but it surely walks\n>> and quacks like one. My proposal is to fix this by tracking the lsn of\n>> the last LSN for a sequence increment, and then check that LSN in\n>> RecordTransactionCommit() before calling XLogFlush().\n> \n> (1) Does that work if the aborted increment was in a different\n> session? I think it is okay but I'm tired enough to not be sure.\n> \n\nGood point - it doesn't :-( At least not by simply storing LSN in a \nglobal variable or something like that.\n\nThe second backend needs to know the LSN of the last WAL-logged sequence \nincrement, but only the first backend knows that. So we'd need to share \nthat between backends somehow. I doubt we want to track LSN for every \nindividual sequence (because for clusters with many dbs / sequences that \nmay be a lot).\n\nPerhaps we could track just a fixed number o LSN values in shared memory \n(say, 1024), and update/read just the element determined by hash(oid). \nThat is, the backend WAL-logging sequence with given oid would set the \ncurrent LSN to array[hash(oid) % 1024], and backend doing nextval() \nwould simply remember the LSN in that slot. Yes, if there are conflicts \nthat'll flush more than needed.\n\nAlternatively we could simply use the current insert LSN, but that's \ngoing to flush more stuff than needed all the time.\n\n\n> (2) I'm starting to wonder if we should rethink the sequence logging\n> mechanism altogether. It was cool when designed, but it seems\n> really problematic when you start thinking about replication\n> behaviors. Perhaps if wal_level > minimal, we don't do things\n> the same way?\n\nMaybe, but I have no idea how should the reworked WAL logging work. 
Any \nbatching seems to have this issue, and loging individual increments is \nlikely going to be slower.\n\nOf course, reworking how sequences are WAL-logged may invalidate the \n\"sequence decoding\" patch I've been working on :-(\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 18 Dec 2021 07:00:45 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "On 12/18/21 07:00, Tomas Vondra wrote:\n> \n> \n> On 12/18/21 05:52, Tom Lane wrote:\n>> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>>> The problem is exactly the same as in [1] - the aborted transaction\n>>> generated WAL, but RecordTransactionAbort() ignores that and does not\n>>> update LogwrtResult.Write, with the reasoning that aborted transactions\n>>> do not matter. But sequences violate that, because we only write WAL\n>>> once every 32 increments, so the following nextval() gets \"committed\"\n>>> without waiting for the replica (because it did not produce WAL).\n>>\n>> Ugh.\n>>\n>>> I'm not sure this is a clear data corruption bug, but it surely walks\n>>> and quacks like one. My proposal is to fix this by tracking the lsn of\n>>> the last LSN for a sequence increment, and then check that LSN in\n>>> RecordTransactionCommit() before calling XLogFlush().\n>>\n>> (1) Does that work if the aborted increment was in a different\n>> session?  I think it is okay but I'm tired enough to not be sure.\n>>\n> \n> Good point - it doesn't :-( At least not by simply storing LSN in a \n> global variable or something like that.\n> \n> The second backend needs to know the LSN of the last WAL-logged sequence \n> increment, but only the first backend knows that. So we'd need to share \n> that between backends somehow. 
I doubt we want to track LSN for every \n> individual sequence (because for clusters with many dbs / sequences that \n> may be a lot).\n> \n> Perhaps we could track just a fixed number o LSN values in shared memory \n> (say, 1024), and update/read just the element determined by hash(oid). \n> That is, the backend WAL-logging sequence with given oid would set the \n> current LSN to array[hash(oid) % 1024], and backend doing nextval() \n> would simply remember the LSN in that slot. Yes, if there are conflicts \n> that'll flush more than needed.\n> \n\nHere's a PoC demonstrating this idea. I'm not convinced it's the right \nway to deal with this - it surely seems more like a duct tape fix than a \nclean solution. But it does the trick.\n\nI wonder if storing this in shmem is good enough - we lose the LSN info \non restart, but the checkpoint should trigger FPI which makes it OK.\n\nA bigger question is whether sequences are the only thing affected by \nthis. If you look at RecordTransactionCommit() then we skip flush/wait \nin two cases:\n\n1) !wrote_xlog - if the xact did not produce WAL\n\n2) !markXidCommitted - if the xact does not have a valid XID\n\nBoth apply to sequences, and the PoC patch tweaks them. But maybe there \nare other places where we don't generate WAL and/or assign XID in some \ncases, to save time?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 18 Dec 2021 21:45:09 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> Here's a PoC demonstrating this idea. I'm not convinced it's the right \n> way to deal with this - it surely seems more like a duct tape fix than a \n> clean solution. 
But it does the trick.\n\nI was imagining something a whole lot simpler, like \"don't try to\ncache unused sequence numbers when wal_level > minimal\". We've\naccepted worse performance hits in that operating mode, and it'd\nfix a number of user complaints we've seen about weird sequence\nbehavior on standbys.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 Dec 2021 16:27:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "\n\nOn 12/18/21 22:27, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> Here's a PoC demonstrating this idea. I'm not convinced it's the right\n>> way to deal with this - it surely seems more like a duct tape fix than a\n>> clean solution. But it does the trick.\n> \n> I was imagining something a whole lot simpler, like \"don't try to\n> cache unused sequence numbers when wal_level > minimal\". We've\n> accepted worse performance hits in that operating mode, and it'd\n> fix a number of user complaints we've seen about weird sequence\n> behavior on standbys.\n> \n\nWhat do you mean by \"not caching unused sequence numbers\"? Reducing \nSEQ_LOG_VALS to 1, i.e. WAL-logging every sequence increment?\n\nThat'd work, but I wonder how significant the impact will be. It'd bet \nit hurts the patch adding logical decoding of sequences quite a bit.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 18 Dec 2021 22:48:11 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> What do you mean by \"not caching unused sequence numbers\"? Reducing \n> SEQ_LOG_VALS to 1, i.e. 
WAL-logging every sequence increment?\n\nThat'd work, but I wonder how significant the impact will be. I'd bet \nit hurts the patch adding logical decoding of sequences quite a bit.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 18 Dec 2021 22:48:11 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> What do you mean by \"not caching unused sequence numbers\"? Reducing \n> SEQ_LOG_VALS to 1, i.e. WAL-logging every sequence increment?\n\nRight.\n\n> That'd work, but I wonder how significant the impact will be.\n\nAs I said, we've accepted worse in order to have stable replication\nbehavior.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 Dec 2021 16:51:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "On Sat, Dec 18, 2021 at 7:24 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> while working on logical decoding of sequences, I ran into an issue with\n> nextval() in a transaction that rolls back, described in [1]. But after\n> thinking about it a bit more (and chatting with Petr Jelinek), I think\n> this issue affects physical sync replication too.\n>\n> Imagine you have a primary <-> sync_replica cluster, and you do this:\n>\n>     CREATE SEQUENCE s;\n>\n>     -- shutdown the sync replica\n>\n>     BEGIN;\n>     SELECT nextval('s') FROM generate_series(1,50);\n>     ROLLBACK;\n>\n>     BEGIN;\n>     SELECT nextval('s');\n>     COMMIT;\n>\n> The natural expectation would be the COMMIT gets stuck, waiting for the\n> sync replica (which is not running), right? But it does not.\n>\n\nHow about if we always WAL log the first sequence change in a transaction?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sun, 19 Dec 2021 08:33:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "On 18.12.21 22:48, Tomas Vondra wrote:\n> What do you mean by \"not caching unused sequence numbers\"? Reducing \n> SEQ_LOG_VALS to 1, i.e. WAL-logging every sequence increment?\n> \n> That'd work, but I wonder how significant the impact will be. I'd bet \n> it hurts the patch adding logical decoding of sequences quite a bit.\n\nIt might be worth testing. 
This behavior is ancient and has never \nreally been scrutinized since it was added.\n\n\n", "msg_date": "Mon, 20 Dec 2021 15:31:11 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "On 12/20/21 15:31, Peter Eisentraut wrote:\n> On 18.12.21 22:48, Tomas Vondra wrote:\n>> What do you mean by \"not caching unused sequence numbers\"? Reducing \n>> SEQ_LOG_VALS to 1, i.e. WAL-logging every sequence increment?\n>>\n>> That'd work, but I wonder how significant the impact will be. I'd bet \n>> it hurts the patch adding logical decoding of sequences quite a bit.\n> \n> It might be worth testing.  This behavior is ancient and has never \n> really been scrutinized since it was added.\n> \n\nOK, I'll do some testing to measure the overhead, and I'll see how much \nit affects the sequence decoding patch.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 20 Dec 2021 17:40:14 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "On 12/20/21 17:40, Tomas Vondra wrote:\n> On 12/20/21 15:31, Peter Eisentraut wrote:\n>> On 18.12.21 22:48, Tomas Vondra wrote:\n>>> What do you mean by \"not caching unused sequence numbers\"? Reducing \n>>> SEQ_LOG_VALS to 1, i.e. WAL-logging every sequence increment?\n>>>\n>>> That'd work, but I wonder how significant the impact will be. I'd \n>>> bet it hurts the patch adding logical decoding of sequences quite a bit.\n>>\n>> It might be worth testing. 
This behavior is ancient and has never \n>> really been scrutinized since it was added.\n>>\n> \n> OK, I'll do some testing to measure the overhead, and I'll see how much \n> it affects the sequence decoding patch.\n> \n\nOK, I did a quick test with two very simple benchmarks - simple select \nfrom a sequence, and 'pgbench -N' on scale 1. Benchmark was on current \nmaster, patched means SEQ_LOG_VALS was set to 1.\n\nAverage of 10 runs, each 30 seconds long, look like this:\n\n1) select nextval('s');\n\n clients 1 4\n ------------------------------\n master 39497 123137\n patched 6813 18326\n ------------------------------\n diff -83% -86%\n\n2) pgbench -N\n\n clients 1 4\n ------------------------------\n master 2935 9156\n patched 2937 9100\n ------------------------------\n diff 0% 0%\n\n\nClearly the extreme case (1) is hit pretty bad, while the much mure \nlikely workload (2) is almost unaffected.\n\n\nI'm not sure what conclusion to make from this, but assuming almost no \none does just nextval calls, it should be acceptable.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 21 Dec 2021 01:53:02 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> OK, I did a quick test with two very simple benchmarks - simple select \n> from a sequence, and 'pgbench -N' on scale 1. Benchmark was on current \n> master, patched means SEQ_LOG_VALS was set to 1.\n\nBut ... pgbench -N doesn't use sequences at all, does it?\n\nProbably inserts into a table with a serial column would constitute a\nplausible real-world case.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 Dec 2021 20:01:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sequences vs. 
synchronous replication" }, { "msg_contents": "On 12/21/21 02:01, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> OK, I did a quick test with two very simple benchmarks - simple select\n>> from a sequence, and 'pgbench -N' on scale 1. Benchmark was on current\n>> master, patched means SEQ_LOG_VALS was set to 1.\n> \n> But ... pgbench -N doesn't use sequences at all, does it?\n> \n> Probably inserts into a table with a serial column would constitute a\n> plausible real-world case.\n> \n\nD'oh! For some reason I thought pgbench has a sequence on the history \ntable, but clearly I was mistaken. There's another thinko, because after \ninspecting pg_waldump output I realized \"SEQ_LOG_VALS 1\" actually logs \nonly every 2nd increment. So it should be \"SEQ_LOG_VALS 0\".\n\nSo I repeated the test fixing SEQ_LOG_VALS, and doing the pgbench with a \ntable like this:\n\n create table test (a serial, b int);\n\nand a script doing\n\n insert into test (b) values (1);\n\nThe results look like this:\n\n1) select nextval('s');\n\n clients 1 4\n ------------------------------\n master 39533 124998\n patched 3748 9114\n ------------------------------\n diff -91% -93%\n\n\n2) insert into test (b) values (1);\n\n clients 1 4\n ------------------------------\n master 3718 9188\n patched 3698 9209\n ------------------------------\n diff 0% 0%\n\nSo the nextval() results are a bit worse, due to not caching 1/2 the \nnextval calls. The -90% is roughly expected, due to generating about 32x \nmore WAL (and having to wait for commit).\n\nBut results for the more realistic insert workload are about the same as \nbefore (i.e. no measurable difference). 
Also kinda expected, because \nthose transactions have to wait for WAL anyway.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 21 Dec 2021 03:49:33 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "\n\nOn 12/19/21 04:03, Amit Kapila wrote:\n> On Sat, Dec 18, 2021 at 7:24 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> while working on logical decoding of sequences, I ran into an issue with\n>> nextval() in a transaction that rolls back, described in [1]. But after\n>> thinking about it a bit more (and chatting with Petr Jelinek), I think\n>> this issue affects physical sync replication too.\n>>\n>> Imagine you have a primary <-> sync_replica cluster, and you do this:\n>>\n>> CREATE SEQUENCE s;\n>>\n>> -- shutdown the sync replica\n>>\n>> BEGIN;\n>> SELECT nextval('s') FROM generate_series(1,50);\n>> ROLLBACK;\n>>\n>> BEGIN;\n>> SELECT nextval('s');\n>> COMMIT;\n>>\n>> The natural expectation would be the COMMIT gets stuck, waiting for the\n>> sync replica (which is not running), right? But it does not.\n>>\n> \n> How about if we always WAL log the first sequence change in a transaction?\n> \n\nI've been thinking about doing something like this, but I think it would \nnot have any significant advantages compared to using \"SEQ_LOG_VALS 0\". \nIt would still have the same performance hit for plain nextval() calls, \nand there's no measurable impact on simple workloads that already write \nWAL in transactions even with SEQ_LOG_VALS 0.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 22 Dec 2021 02:57:02 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: sequences vs. 
synchronous replication" }, { "msg_contents": "\n\nOn 2021/12/22 10:57, Tomas Vondra wrote:\n> \n> \n> On 12/19/21 04:03, Amit Kapila wrote:\n>> On Sat, Dec 18, 2021 at 7:24 AM Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>>\n>>> while working on logical decoding of sequences, I ran into an issue with\n>>> nextval() in a transaction that rolls back, described in [1]. But after\n>>> thinking about it a bit more (and chatting with Petr Jelinek), I think\n>>> this issue affects physical sync replication too.\n>>>\n>>> Imagine you have a primary <-> sync_replica cluster, and you do this:\n>>>\n>>>     CREATE SEQUENCE s;\n>>>\n>>>     -- shutdown the sync replica\n>>>\n>>>     BEGIN;\n>>>     SELECT nextval('s') FROM generate_series(1,50);\n>>>     ROLLBACK;\n>>>\n>>>     BEGIN;\n>>>     SELECT nextval('s');\n>>>     COMMIT;\n>>>\n>>> The natural expectation would be the COMMIT gets stuck, waiting for the\n>>> sync replica (which is not running), right? But it does not.\n>>>\n>>\n>> How about if we always WAL log the first sequence change in a transaction?\n>>\n> \n> I've been thinking about doing something like this, but I think it would not have any significant advantages compared to using \"SEQ_LOG_VALS 0\". It would still have the same performance hit for plain nextval() calls, and there's no measurable impact on simple workloads that already write WAL in transactions even with SEQ_LOG_VALS 0.\n\nJust an idea; if wal_level > minimal, how about making nextval_internal() (1) check whether WAL is replicated to sync standbys, up to the page lsn of the sequence, and (2) forcibly emit a WAL record if not replicated yet? 
A similar check is performed at the beginning of SyncRepWaitForLSN(), so probably we can reuse that code.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 22 Dec 2021 13:56:23 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "\n\nOn 12/22/21 05:56, Fujii Masao wrote:\n> \n> \n> On 2021/12/22 10:57, Tomas Vondra wrote:\n>>\n>>\n>> On 12/19/21 04:03, Amit Kapila wrote:\n>>> On Sat, Dec 18, 2021 at 7:24 AM Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>\n>>>> while working on logical decoding of sequences, I ran into an issue \n>>>> with\n>>>> nextval() in a transaction that rolls back, described in [1]. But after\n>>>> thinking about it a bit more (and chatting with Petr Jelinek), I think\n>>>> this issue affects physical sync replication too.\n>>>>\n>>>> Imagine you have a primary <-> sync_replica cluster, and you do this:\n>>>>\n>>>>     CREATE SEQUENCE s;\n>>>>\n>>>>     -- shutdown the sync replica\n>>>>\n>>>>     BEGIN;\n>>>>     SELECT nextval('s') FROM generate_series(1,50);\n>>>>     ROLLBACK;\n>>>>\n>>>>     BEGIN;\n>>>>     SELECT nextval('s');\n>>>>     COMMIT;\n>>>>\n>>>> The natural expectation would be the COMMIT gets stuck, waiting for the\n>>>> sync replica (which is not running), right? But it does not.\n>>>>\n>>>\n>>> How about if we always WAL log the first sequence change in a \n>>> transaction?\n>>>\n>>\n>> I've been thinking about doing something like this, but I think it \n>> would not have any significant advantages compared to using \n>> \"SEQ_LOG_VALS 0\". 
It would still have the same performance hit for \n>> plain nextval() calls, and there's no measurable impact on simple \n>> workloads that already write WAL in transactions even with \n>> SEQ_LOG_VALS 0.\n> \n> Just idea; if wal_level > minimal, how about making nextval_internal() \n> (1) check whether WAL is replicated to sync standbys, up to the page lsn \n> of the sequence, and (2) forcibly emit a WAL record if not replicated \n> yet? The similar check is performed at the beginning of \n> SyncRepWaitForLSN(), so probably we can reuse that code.\n> \n\nInteresting idea, but I think it has a couple of issues :-(\n\n1) We'd need to know the LSN of the last WAL record for any given \nsequence, and we'd need to communicate that between backends somehow. \nWhich seems rather tricky to do without affecting performance.\n\n2) SyncRepWaitForLSN() is used only in commit-like situations, and it's \na simple wait, not a decision to write more WAL. Environments without \nsync replicas are affected by this too - yes, the data loss issue is not \nthere, but the amount of WAL is still increased.\n\nIIRC sync_standby_names can change while a transaction is running, even \njust right before commit, at which point we can't just go back in time \nand generate WAL for sequences accessed earlier. But we still need to \nensure the sequence is properly replicated.\n\n3) I don't think it'd actually reduce the amount of WAL records in \nenvironments with many sessions (incrementing the same sequence). In \nthose cases the WAL (generated by in-progress xact from another session) \nis likely to not be flushed, so we'd generate the extra WAL record. 
(And if the other \nbackends needed the flush LSN of this new WAL record, that would make it \nmore likely they have to generate WAL too.)\n\n\nSo I don't think this would actually help much.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 22 Dec 2021 13:11:54 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "On 2021/12/22 21:11, Tomas Vondra wrote:\n> Interesting idea, but I think it has a couple of issues :-(\n\nThanks for the review!\n\n> 1) We'd need to know the LSN of the last WAL record for any given sequence, and we'd need to communicate that between backends somehow. Which seems rather tricky to do without affecting performance.\n\nHow about using the page lsn for the sequence? nextval_internal() already uses that to check whether it's less than or equal to checkpoint redo location.\n\n\n> 2) SyncRepWaitForLSN() is used only in commit-like situations, and it's a simple wait, not a decision to write more WAL. Environments without sync replicas are affected by this too - yes, the data loss issue is not there, but the amount of WAL is still increased.\n\nHow about reusing only a part of the code in SyncRepWaitForLSN()? Attached is a PoC patch that implements what I'm thinking.\n\n\n> IIRC sync_standby_names can change while a transaction is running, even just right before commit, at which point we can't just go back in time and generate WAL for sequences accessed earlier. But we still need to ensure the sequence is properly replicated.\n\nYes. In the PoC patch, SyncRepNeedsWait() still checks sync_standbys_defined and uses SyncRepWaitMode. But they should not be checked nor used because their values can be changed on the fly, as you pointed out. 
Probably SyncRepNeedsWait() will need to be changed so that it doesn't use them.\n\n\n> 3) I don't think it'd actually reduce the amount of WAL records in environments with many sessions (incrementing the same sequence). In those cases the WAL (generated by in-progress xact from another session) is likely to not be flushed, so we'd generate the extra WAL record. (And if the other backends would need flush LSN of this new WAL record, which would make it more likely they have to generate WAL too.)\n\nWith the PoC patch, a subsequent nextval() generates an additional WAL record only when the previous transaction that executed nextval() and caused the WAL record was aborted. So this approach can reduce WAL volume compared to the other approach?\n\nIn the PoC patch, to reduce WAL volume more, it might be better to make nextval_internal() update XactLastRecEnd and assign XID rather than emitting a new WAL record, when SyncRepNeedsWait() returns true.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 23 Dec 2021 02:50:49 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "On 12/21/21 03:49, Tomas Vondra wrote:\n> On 12/21/21 02:01, Tom Lane wrote:\n>> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>>> OK, I did a quick test with two very simple benchmarks - simple select\n>>> from a sequence, and 'pgbench -N' on scale 1. Benchmark was on current\n>>> master, patched means SEQ_LOG_VALS was set to 1.\n>>\n>> But ... pgbench -N doesn't use sequences at all, does it?\n>>\n>> Probably inserts into a table with a serial column would constitute a\n>> plausible real-world case.\n>>\n> \n> D'oh! For some reason I thought pgbench has a sequence on the history \n> table, but clearly I was mistaken. 
There's another thinko, because after \n> inspecting pg_waldump output I realized \"SEQ_LOG_VALS 1\" actually logs \n> only every 2nd increment. So it should be \"SEQ_LOG_VALS 0\".\n> \n> So I repeated the test fixing SEQ_LOG_VALS, and doing the pgbench with a \n> table like this:\n> \n>   create table test (a serial, b int);\n> \n> and a script doing\n> \n>   insert into test (b) values (1);\n> \n> The results look like this:\n> \n> 1) select nextval('s');\n> \n>      clients          1         4\n>     ------------------------------\n>      master       39533    124998\n>      patched       3748      9114\n>     ------------------------------\n>      diff          -91%      -93%\n> \n> \n> 2) insert into test (b) values (1);\n> \n>      clients          1         4\n>     ------------------------------\n>      master        3718      9188\n>      patched       3698      9209\n>     ------------------------------\n>      diff            0%        0%\n> \n> So the nextval() results are a bit worse, due to not caching 1/2 the \n> nextval calls. The -90% is roughly expected, due to generating about 32x \n> more WAL (and having to wait for commit).\n> \n> But results for the more realistic insert workload are about the same as \n> before (i.e. no measurable difference). Also kinda expected, because \n> those transactions have to wait for WAL anyway.\n> \n\nAttached is a patch tweaking WAL logging - in wal_level=minimal we do \nthe same thing as now, in higher levels we log every sequence fetch.\n\nAfter thinking about this a bit more, I think even the nextval workload \nis not such a big issue, because we can set cache for the sequences. 
\nUntil now this had fairly limited impact, but it can significantly \nreduce the performance drop caused by WAL-logging every sequence fetch.\n\nI've repeated the nextval test on a different machine (the one I used \nbefore is busy with something else), and the results look like this:\n\n1) 1 client\n\n cache 1 32 128\n --------------------------------------\n master 13975 14425 19886\n patched 886 7900 18397\n --------------------------------------\n diff -94% -45% -7%\n\n4) 4 clients\n\n cache 1 32 128\n -----------------------------------------\n master 8338 12849 18248\n patched 331 8124 18983\n -----------------------------------------\n diff -96% -37% 4%\n\nSo I think this makes it acceptable / manageable. Of course, this means \nthe values are much less monotonous (across backends), but I don't think \nwe really promised that. And I doubt anyone is really using sequences \nlike this (just nextval) in performance critical use cases.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 22 Dec 2021 19:49:15 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "On 12/22/21 18:50, Fujii Masao wrote:\n> \n> \n> On 2021/12/22 21:11, Tomas Vondra wrote:\n>> Interesting idea, but I think it has a couple of issues :-(\n> \n> Thanks for the review!\n> \n>> 1) We'd need to know the LSN of the last WAL record for any given \n>> sequence, and we'd need to communicate that between backends somehow. \n>> Which seems rather tricky to do without affecting performance.\n> \n> How about using the page lsn for the sequence? nextval_internal() \n> already uses that to check whether it's less than or equal to checkpoint \n> redo location.\n> \n\nHmm, maybe.\n\n> \n>> 2) SyncRepWaitForLSN() is used only in commit-like situations, and \n>> it's a simple wait, not a decision to write more WAL. 
Environments \n>> without sync replicas are affected by this too - yes, the data loss \n>> issue is not there, but the amount of WAL is still increased.\n> \n> How about reusing only a part of code in SyncRepWaitForLSN()? Attached \n> is the PoC patch that implemented what I'm thinking.\n> \n> \n>> IIRC sync_standby_names can change while a transaction is running, \n>> even just right before commit, at which point we can't just go back in \n>> time and generate WAL for sequences accessed earlier. But we still \n>> need to ensure the sequence is properly replicated.\n> \n> Yes. In the PoC patch, SyncRepNeedsWait() still checks \n> sync_standbys_defined and uses SyncRepWaitMode. But they should not be \n> checked nor used because their values can be changed on the fly, as you \n> pointed out. Probably SyncRepNeedsWait() will need to be changed so that \n> it doesn't use them.\n> \n\nRight. I think the data loss with sync standby is merely a symptom, not \nthe root cause. We'd need to deduce the LSN for which to wait at commit.\n\n> \n>> 3) I don't think it'd actually reduce the amount of WAL records in \n>> environments with many sessions (incrementing the same sequence). In \n>> those cases the WAL (generated by in-progress xact from another \n>> session) is likely to not be flushed, so we'd generate the extra WAL \n>> record. (And if the other backends would need flush LSN of this new \n>> WAL record, which would make it more likely they have to generate WAL \n>> too.)\n> \n> With the PoC patch, only when previous transaction that executed \n> nextval() and caused WAL record is aborted, subsequent nextval() \n> generates additional WAL record. So this approach can reduce WAL volume \n> than other approach?\n> > In the PoC patch, to reduce WAL volume more, it might be better to make\n> nextval_internal() update XactLastRecEnd and assign XID rather than \n> emitting new WAL record, when SyncRepNeedsWait() returns true.\n> \n\nYes, but I think there are other cases. 
For example the WAL might have \nbeen generated by another backend, in a transaction that might be still \nrunning. In which case I don't see how updating XactLastRecEnd in \nnextval_internal would fix this, right?\n\nI did some experiments with increasing CACHE for the sequence, and that \nmostly eliminates the overhead. See the message I sent a couple minutes \nago. IMHO that's a reasonable solution for the tiny number of people \nusing nextval() in a way that'd be affected by this (i.e. without \nwriting anything else in the xact).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 22 Dec 2021 20:00:09 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "\n\nOn 2021/12/23 3:49, Tomas Vondra wrote:\n> Attached is a patch tweaking WAL logging - in wal_level=minimal we do the same thing as now, in higher levels we log every sequence fetch.\n\nThanks for the patch!\n\nWith the patch, I found that the regression test for sequences failed.\n\n+\t\t\tfetch = log = fetch;\n\nThis should be \"log = fetch\"?\n\nOn second thought, originally a sequence doesn't guarantee that the value already returned by nextval() will never be returned by subsequent nextval() after the server crash recovery. That is, nextval() may return the same value across crash recovery. Is this understanding right? For example, this case can happen if the server crashes after nextval() returned the value but before WAL for the sequence was flushed to the permanent storage. So it's not a bug that sync standby may return the same value as already returned in the primary because the corresponding WAL has not been replicated yet, isn't it?\n\nBTW, if the returned value is stored in database, the same value is guaranteed not to be returned again after the server crash or by sync standby. 
Because in that case the WAL of the transaction storing that value is flushed and replicated.\n\n> So I think this makes it acceptable / manageable. Of course, this means the values are much less monotonous (across backends), but I don't think we really promised that. And I doubt anyone is really using sequences like this (just nextval) in performance critical use cases.\n\nI think that this approach is not acceptable to some users. So, if we actually adopt WAL-logging every sequence fetch, also how about exposing SEQ_LOG_VALS as a reloption for a sequence? If so, those who want to log every sequence fetch can set this SEQ_LOG_VALS reloption to 0. OTOH, those who prefer the current behavior in spite of the risk we're discussing in this thread can set the reloption to 32 like it is for now, for example.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 23 Dec 2021 23:42:05 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "On 12/23/21 15:42, Fujii Masao wrote:\n> \n> \n> On 2021/12/23 3:49, Tomas Vondra wrote:\n>> Attached is a patch tweaking WAL logging - in wal_level=minimal we do \n>> the same thing as now, in higher levels we log every sequence fetch.\n> \n> Thanks for the patch!\n> \n> With the patch, I found that the regression test for sequences failed.\n> \n> +            fetch = log = fetch;\n> \n> This should be \"log = fetch\"?\n> \n> On second thought, originally a sequence doesn't guarantee that the \n> value already returned by nextval() will never be returned by subsequent \n> nextval() after the server crash recovery. That is, nextval() may return \n> the same value across crash recovery. Is this understanding right? 
For \n> example, this case can happen if the server crashes after nextval() \n> returned the value but before WAL for the sequence was flushed to the \n> permanent storage.\n\nI think the important step is commit. We don't guarantee anything for \nchanges in uncommitted transactions. If you do nextval in a transaction \nand the server crashes before the WAL gets flushed before COMMIT, then \nyes, nextval may generate the same nextval again. But after commit that \nis not OK - it must not happen.\n\n> So it's not a bug that sync standby may return the same value as\n> already returned in the primary because the corresponding WAL has not\n> been replicated yet, isn't it?\n> \n\nNo, I don't think so. Once the COMMIT happens (and gets confirmed by the \nsync standby), it should be possible to failover to the sync replica \nwithout losing any data in committed transaction. Generating duplicate \nvalues is a clear violation of that.\n\nIMHO the fact that we allow a transaction to commit (even just locally) \nwithout flushing all the WAL it depends on is clearly a data loss bug.\n\n> BTW, if the returned value is stored in database, the same value is \n> guaranteed not to be returned again after the server crash or by sync \n> standby. Because in that case the WAL of the transaction storing that \n> value is flushed and replicated.\n> \n\nTrue, assuming the table is WAL-logged etc. I agree the issue may be \naffecting a fairly small fraction of workloads, because most people use \nsequences to generate data for inserts etc.\n\n>> So I think this makes it acceptable / manageable. Of course, this \n>> means the values are much less monotonous (across backends), but I \n>> don't think we really promised that. And I doubt anyone is really \n>> using sequences like this (just nextval) in performance critical use \n>> cases.\n> \n> I think that this approach is not acceptable to some users. 
So, if we \n> actually adopt WAL-logging every sequence fetch, also how about exposing \n> SEQ_LOG_VALS as reloption for a sequence? If so, those who want to log \n> every sequence fetch can set this SEQ_LOG_VALS reloption to 0. OTOH, \n> those who prefer the current behavior in spite of the risk we're \n> discussing at this thread can set the reloption to 32 like it is for \n> now, for example.\n> \n\nI think it'd be worth explaining why you think it's not acceptable?\n\nI've demonstrated the impact on regular workloads (with other changes \nthat write stuff to WAL) is not measurable, and enabling sequence \ncaching eliminates most of the overhead for the rare corner case \nworkloads if needed. It does generate a bit more WAL, but the sequence \nWAL records are pretty tiny.\n\nI'm opposed to adding relooptions that affect correctness - it just \nseems like a bad idea to me. Moreover setting the CACHE for a sequence \ndoes almost the same thing - if you set CACHE 32, we only generate WAL \nonce every 32 increments. The only difference is that this cache is not \nshared between backends, so one backend will generate 1,2,3,... and \nanother backend will generate 33,34,35,... etc. I don't think that's a \nproblem, because if you want strictly monotonous / gap-less sequences \nyou can't use our sequences anyway. Yes, with short-lived backends this \nmay consume the sequences faster, but well - short-lived backends are \nexpensive anyway and overflowing bigserial is still unlikely.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 23 Dec 2021 19:50:22 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: sequences vs. 
synchronous replication" }, { "msg_contents": "At Thu, 23 Dec 2021 19:50:22 +0100, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote in \n> On 12/23/21 15:42, Fujii Masao wrote:\n> > On 2021/12/23 3:49, Tomas Vondra wrote:\n> >> Attached is a patch tweaking WAL logging - in wal_level=minimal we do\n> >> the same thing as now, in higher levels we log every sequence fetch.\n> > Thanks for the patch!\n> > With the patch, I found that the regression test for sequences failed.\n> > +            fetch = log = fetch;\n> > This should be \"log = fetch\"?\n> > On second thought, originally a sequence doesn't guarantee that the\n> > value already returned by nextval() will never be returned by\n> > subsequent nextval() after the server crash recovery. That is,\n> > nextval() may return the same value across crash recovery. Is this\n> > understanding right? For example, this case can happen if the server\n> > crashes after nextval() returned the value but before WAL for the\n> > sequence was flushed to the permanent storage.\n> \n> I think the important step is commit. We don't guarantee anything for\n> changes in uncommitted transactions. If you do nextval in a\n> transaction and the server crashes before the WAL gets flushed before\n> COMMIT, then yes, nextval may generate the same nextval again. But\n> after commit that is not OK - it must not happen.\n\nI don't mean to stand on Fujii-san's side particularly, but it seems\nto me sequences of RDBSs are not rolled back generally. Some googling\ntold me that at least Oracle (documented), MySQL, DB2 and MS-SQL\nserver doesn't rewind sequences at rollback, that is, sequences are\nincremented independtly from transaction control. 
It seems common to\nthink that two nextval() calls for the same sequence must not return\nthe same value in any context.\n\n> > So it's not a bug that sync standby may return the same value as\n> > already returned in the primary because the corresponding WAL has not\n> > been replicated yet, isn't it?\n> > \n> \n> No, I don't think so. Once the COMMIT happens (and gets confirmed by\n> the sync standby), it should be possible to failover to the sync\n> replica without losing any data in committed transaction. Generating\n> duplicate values is a clear violation of that.\n\nSo, strictly speaking, that is a violation of the constraint I\nmentioned regardless whether the transaction is committed or\nnot. However we have technical limitations as below.\n\n> IMHO the fact that we allow a transaction to commit (even just\n> locally) without flushing all the WAL it depends on is clearly a data\n> loss bug.\n> \n> > BTW, if the returned value is stored in database, the same value is\n> > guaranteed not to be returned again after the server crash or by sync\n> > standby. Because in that case the WAL of the transaction storing that\n> > value is flushed and replicated.\n> > \n> \n> True, assuming the table is WAL-logged etc. I agree the issue may be\n> affecting a fairly small fraction of workloads, because most people\n> use sequences to generate data for inserts etc.\n\nIt seems to me, from the fact that sequences are designed explicitly\nuntransactional and that behavior is widely adopted, the discussion\nmight be missing some significant use-cases. But there's a\npossibility that the spec of sequence came from some technical\nlimitation in the past, but I'm not sure..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 24 Dec 2021 14:37:25 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sequences vs. 
synchronous replication" }, { "msg_contents": "\n\nOn 12/24/21 06:37, Kyotaro Horiguchi wrote:\n> At Thu, 23 Dec 2021 19:50:22 +0100, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote in\n>> On 12/23/21 15:42, Fujii Masao wrote:\n>>> On 2021/12/23 3:49, Tomas Vondra wrote:\n>>>> Attached is a patch tweaking WAL logging - in wal_level=minimal we do\n>>>> the same thing as now, in higher levels we log every sequence fetch.\n>>> Thanks for the patch!\n>>> With the patch, I found that the regression test for sequences failed.\n>>> +            fetch = log = fetch;\n>>> This should be \"log = fetch\"?\n>>> On second thought, originally a sequence doesn't guarantee that the\n>>> value already returned by nextval() will never be returned by\n>>> subsequent nextval() after the server crash recovery. That is,\n>>> nextval() may return the same value across crash recovery. Is this\n>>> understanding right? For example, this case can happen if the server\n>>> crashes after nextval() returned the value but before WAL for the\n>>> sequence was flushed to the permanent storage.\n>>\n>> I think the important step is commit. We don't guarantee anything for\n>> changes in uncommitted transactions. If you do nextval in a\n>> transaction and the server crashes before the WAL gets flushed before\n>> COMMIT, then yes, nextval may generate the same nextval again. But\n>> after commit that is not OK - it must not happen.\n> \n> I don't mean to stand on Fujii-san's side particularly, but it seems\n> to me sequences of RDBSs are not rolled back generally. Some googling\n> told me that at least Oracle (documented), MySQL, DB2 and MS-SQL\n> server doesn't rewind sequences at rollback, that is, sequences are\n> incremented independtly from transaction control. It seems common to\n> think that two nextval() calls for the same sequence must not return\n> the same value in any context.\n> \n\nYes, sequences are not rolled back on abort generally. 
That would \nrequire much stricter locking, and that'd go against using sequences in \nconcurrent sessions.\n\nBut we're not talking about sequence rollback - we're talking about data \nloss, caused by failure to flush WAL for a sequence. But that affects \nthe *current* code too, and to much greater extent.\n\nConsider this:\n\nBEGIN;\nSELECT nextval('s') FROM generate_series(1,1000) s(i);\nROLLBACK; -- or crash of a different backend\n\nBEGIN;\nSELECT nextval('s');\nCOMMIT;\n\nWith the current code, this may easily lose the WAL, and we'll generate \nduplicate values from the sequence. We pretty much ignore the COMMIT.\n\nWith the proposed change to WAL logging, that is not possible. The \nCOMMIT flushes enough WAL to prevent this issue.\n\nSo this actually makes this issue less severe.\n\nMaybe I'm missing some important detail, though. Can you show an example \nwhere the proposed changes make the issue worse?\n\n>>> So it's not a bug that sync standby may return the same value as\n>>> already returned in the primary because the corresponding WAL has not\n>>> been replicated yet, isn't it?\n>>>\n>>\n>> No, I don't think so. Once the COMMIT happens (and gets confirmed by\n>> the sync standby), it should be possible to failover to the sync\n>> replica without losing any data in committed transaction. Generating\n>> duplicate values is a clear violation of that.\n> \n> So, strictly speaking, that is a violation of the constraint I\n> mentioned regardless whether the transaction is committed or\n> not. However we have technical limitations as below.\n> \n\nI don't follow. What violates what?\n\nIf the transaction commits (and gets a confirmation from sync replica), \nthe modified WAL logging prevents duplicate values. It does nothing for \nuncommitted transactions. 
Seems like an improvement to me.\n\n>> IMHO the fact that we allow a transaction to commit (even just\n>> locally) without flushing all the WAL it depends on is clearly a data\n>> loss bug.\n>>\n>>> BTW, if the returned value is stored in database, the same value is\n>>> guaranteed not to be returned again after the server crash or by sync\n>>> standby. Because in that case the WAL of the transaction storing that\n>>> value is flushed and replicated.\n>>>\n>>\n>> True, assuming the table is WAL-logged etc. I agree the issue may be\n>> affecting a fairly small fraction of workloads, because most people\n>> use sequences to generate data for inserts etc.\n> \n> It seems to me, from the fact that sequences are designed explicitly\n> untransactional and that behavior is widely adopted, the discussion\n> might be missing some significant use-cases. But there's a\n> possibility that the spec of sequence came from some technical\n> limitation in the past, but I'm not sure..\n> \n\nNo idea. IMHO from the correctness / behavior point of view, the \nmodified logging is an improvement. The only issue is the additional \noverhead, and I think the cache addresses that quite well.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 24 Dec 2021 08:23:13 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: sequences vs. 
synchronous replication" }, { "msg_contents": "At Fri, 24 Dec 2021 08:23:13 +0100, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote in \n> \n> \n> On 12/24/21 06:37, Kyotaro Horiguchi wrote:\n> > At Thu, 23 Dec 2021 19:50:22 +0100, Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote in\n> >> On 12/23/21 15:42, Fujii Masao wrote:\n> >>> On 2021/12/23 3:49, Tomas Vondra wrote:\n> >>>> Attached is a patch tweaking WAL logging - in wal_level=minimal we do\n> >>>> the same thing as now, in higher levels we log every sequence fetch.\n> >>> Thanks for the patch!\n> >>> With the patch, I found that the regression test for sequences failed.\n> >>> +            fetch = log = fetch;\n> >>> This should be \"log = fetch\"?\n> >>> On second thought, originally a sequence doesn't guarantee that the\n> >>> value already returned by nextval() will never be returned by\n> >>> subsequent nextval() after the server crash recovery. That is,\n> >>> nextval() may return the same value across crash recovery. Is this\n> >>> understanding right? For example, this case can happen if the server\n> >>> crashes after nextval() returned the value but before WAL for the\n> >>> sequence was flushed to the permanent storage.\n> >>\n> >> I think the important step is commit. We don't guarantee anything for\n> >> changes in uncommitted transactions. If you do nextval in a\n> >> transaction and the server crashes before the WAL gets flushed before\n> >> COMMIT, then yes, nextval may generate the same nextval again. But\n> >> after commit that is not OK - it must not happen.\n> > I don't mean to stand on Fujii-san's side particularly, but it seems\n> > to me sequences of RDBSs are not rolled back generally. Some googling\n> > told me that at least Oracle (documented), MySQL, DB2 and MS-SQL\n> > server doesn't rewind sequences at rollback, that is, sequences are\n> > incremented independtly from transaction control. 
It seems common to\n> > think that two nextval() calls for the same sequence must not return\n> > the same value in any context.\n> > \n> \n> Yes, sequences are not rolled back on abort generally. That would\n> require much stricter locking, and that'd go against using sequences\n> in concurrent sessions.\n\nI thinks so.\n\n> But we're not talking about sequence rollback - we're talking about\n> data loss, caused by failure to flush WAL for a sequence. But that\n> affects the *current* code too, and to much greater extent.\n\nAh, yes, I don't object to that aspect.\n\n> Consider this:\n> \n> BEGIN;\n> SELECT nextval('s') FROM generate_series(1,1000) s(i);\n> ROLLBACK; -- or crash of a different backend\n> \n> BEGIN;\n> SELECT nextval('s');\n> COMMIT;\n> \n> With the current code, this may easily lose the WAL, and we'll\n> generate duplicate values from the sequence. We pretty much ignore the\n> COMMIT.\n>\n> With the proposed change to WAL logging, that is not possible. The\n> COMMIT flushes enough WAL to prevent this issue.\n> \n> So this actually makes this issue less severe.\n> \n> Maybe I'm missing some important detail, though. Can you show an\n> example where the proposed changes make the issue worse?\n\nNo. It seems to me improvoment at least from the current state, for\nthe reason you mentioned.\n\n> >>> So it's not a bug that sync standby may return the same value as\n> >>> already returned in the primary because the corresponding WAL has not\n> >>> been replicated yet, isn't it?\n> >>>\n> >>\n> >> No, I don't think so. Once the COMMIT happens (and gets confirmed by\n> >> the sync standby), it should be possible to failover to the sync\n> >> replica without losing any data in committed transaction. Generating\n> >> duplicate values is a clear violation of that.\n> > So, strictly speaking, that is a violation of the constraint I\n> > mentioned regardless whether the transaction is committed or\n> > not. 
However we have technical limitations as below.\n> > \n> \n> I don't follow. What violates what?\n> \n> If the transaction commits (and gets a confirmation from sync\n> replica), the modified WAL logging prevents duplicate values. It does\n> nothing for uncommitted transactions. Seems like an improvement to me.\n\nSorry for the noise. I misunderstand that ROLLBACK is being changed to\nrollback sequences.\n\n> No idea. IMHO from the correctness / behavior point of view, the\n> modified logging is an improvement. The only issue is the additional\n> overhead, and I think the cache addresses that quite well.\n\nNow I understand the story here.\n\nI agree that the patch is improvment from the current behavior.\nI agree that the overhead is eventually-nothing for WAL-emitting workloads.\n\nStill, as Fujii-san concerns, I'm afraid that some people may suffer\nthe degradation the patch causes. I wonder it is acceptable to get\nback the previous behavior by exposing SEQ_LOG_VALS itself or a\nboolean to do that, as a 'not-recommended-to-use' variable.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 24 Dec 2021 17:04:51 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "\n\nOn 12/24/21 09:04, Kyotaro Horiguchi wrote:\n>>> ...\n>>> So, strictly speaking, that is a violation of the constraint I\n>>> mentioned regardless whether the transaction is committed or\n>>> not. However we have technical limitations as below.\n>>>\n>>\n>> I don't follow. What violates what?\n>>\n>> If the transaction commits (and gets a confirmation from sync\n>> replica), the modified WAL logging prevents duplicate values. It does\n>> nothing for uncommitted transactions. Seems like an improvement to me.\n> \n> Sorry for the noise. 
I misunderstand that ROLLBACK is being changed to\n> rollback sequences.\n> \n\nNo problem, this part of the code is certainly rather confusing due to \nseveral layers of caching and these WAL-logging optimizations.\n\n>> No idea. IMHO from the correctness / behavior point of view, the\n>> modified logging is an improvement. The only issue is the additional\n>> overhead, and I think the cache addresses that quite well.\n> \n> Now I understand the story here.\n> \n> I agree that the patch is improvment from the current behavior.\n> I agree that the overhead is eventually-nothing for WAL-emitting workloads.\n> \n\nOK, thanks.\n\n> Still, as Fujii-san concerns, I'm afraid that some people may suffer\n> the degradation the patch causes. I wonder it is acceptable to get\n> back the previous behavior by exposing SEQ_LOG_VALS itself or a\n> boolean to do that, as a 'not-recommended-to-use' variable.\n> \n\nMaybe, but what would such workload look like? Based on the tests I did, \nsuch workload probably can't generate any WAL. The amount of WAL added \nby the change is tiny, the regression is caused by having to flush WAL.\n\nThe only plausible workload I can think of is just calling nextval, and \nthe cache pretty much fixes that.\n\nFWIW I plan to explore the idea of looking at sequence page LSN, and \nflushing up to that position.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 24 Dec 2021 11:40:20 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "On 24.12.21 09:04, Kyotaro Horiguchi wrote:\n> Still, as Fujii-san concerns, I'm afraid that some people may suffer\n> the degradation the patch causes. 
I wonder it is acceptable to get\n> back the previous behavior by exposing SEQ_LOG_VALS itself or a\n> boolean to do that, as a 'not-recommended-to-use' variable.\n\nThere is also the possibility of unlogged sequences if you want to avoid \nthe WAL logging and get higher performance.\n\n\n", "msg_date": "Mon, 27 Dec 2021 21:24:43 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "On 12/27/21 21:24, Peter Eisentraut wrote:\n> On 24.12.21 09:04, Kyotaro Horiguchi wrote:\n>> Still, as Fujii-san concerns, I'm afraid that some people may suffer\n>> the degradation the patch causes.  I wonder it is acceptable to get\n>> back the previous behavior by exposing SEQ_LOG_VALS itself or a\n>> boolean to do that, as a 'not-recommended-to-use' variable.\n> \n> There is also the possibility of unlogged sequences if you want to avoid \n> the WAL logging and get higher performance.\n\nBut unlogged sequences are not supported:\n\n test=# create unlogged sequence s;\n ERROR: unlogged sequences are not supported\n\nAnd even if we did, what would be the behavior after crash? For tables \nwe discard the contents, so for sequences we'd probably discard it too \nand start from scratch? That doesn't seem particularly useful.\n\nWe could also write / fsync the sequence buffer, but that has other \ndownsides. But that's not implemented either, and it's certainly out of \nscope for this patch.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 28 Dec 2021 02:39:51 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "\n\nOn 2021/12/24 19:40, Tomas Vondra wrote:\n> Maybe, but what would such workload look like? 
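[Editor's note: the amortization being discussed — SEQ_LOG_VALS-style pre-logging, where one WAL record reserves a batch of values so most nextval() calls touch no WAL — can be sketched with this simplified Python model. The batch size of 32 mirrors the C constant mentioned above; everything else is an invented toy, not the actual sequence.c code.]

```python
SEQ_LOG_VALS = 32  # values pre-logged per WAL record (models the C constant)


class PreloggedSequence:
    """Model of pre-logging: one WAL record reserves a batch of values,
    so most nextval() calls emit no WAL at all."""
    def __init__(self):
        self.last_value = 0
        self.log_cnt = 0      # values still covered by the last WAL record
        self.wal_records = 0  # how many WAL records were emitted

    def nextval(self):
        if self.log_cnt == 0:
            # Log ahead of the values we will hand out. After a crash,
            # redo jumps to the pre-logged point, which is why values can
            # be skipped but should not be handed out twice once the WAL
            # record is durable.
            self.wal_records += 1
            self.log_cnt = SEQ_LOG_VALS
        self.log_cnt -= 1
        self.last_value += 1
        return self.last_value


seq = PreloggedSequence()
for _ in range(1000):
    seq.nextval()
print(seq.wal_records)  # 32 records instead of 1000
```

This is why a plain nextval-only workload sees little overhead once the cache/pre-log batch is large enough: the per-call WAL cost is amortized across the batch.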
Based on the tests I did, such workload probably can't generate any WAL. The amount of WAL added by the change is tiny, the regression is caused by having to flush WAL.\n> \n> The only plausible workload I can think of is just calling nextval, and the cache pretty much fixes that.\n\nSome users don't want to increase cache setting, do they? Because\n\n- They may expect that setval() affects all subsequent nextval(). But if cache is set to greater than one, the value set by setval() doesn't affect other backends until they consumed all the cached sequence values.\n- They may expect that the value returned from nextval() is basically increased monotonically. If cache is set to greater than one, subsequent nextval() can easily return smaller value than one returned by previous nextval().\n- They may want to avoid \"hole\" of a sequence as much as possible, e.g., as far as the server is running normally. If cache is set to greater than one, such \"hole\" can happen even thought the server doesn't crash yet.\n\n\n> FWIW I plan to explore the idea of looking at sequence page LSN, and flushing up to that position.\n\nSounds great, thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 28 Dec 2021 15:56:22 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "Sequence validation by step, in total is great. If the sequence is Familie\nor professional, does it make sense to a have a total validation by an\nexpert. I can only say true by chi square Networks, but would a medical\nopinion be an improvement?\n\nFujii Masao <masao.fujii@oss.nttdata.com> schrieb am Di., 28. Dez. 2021,\n07:56:\n\n>\n>\n> On 2021/12/24 19:40, Tomas Vondra wrote:\n> > Maybe, but what would such workload look like? 
Based on the tests I did,\n> such workload probably can't generate any WAL. The amount of WAL added by\n> the change is tiny, the regression is caused by having to flush WAL.\n> >\n> > The only plausible workload I can think of is just calling nextval, and\n> the cache pretty much fixes that.\n>\n> Some users don't want to increase cache setting, do they? Because\n>\n> - They may expect that setval() affects all subsequent nextval(). But if\n> cache is set to greater than one, the value set by setval() doesn't affect\n> other backends until they consumed all the cached sequence values.\n> - They may expect that the value returned from nextval() is basically\n> increased monotonically. If cache is set to greater than one, subsequent\n> nextval() can easily return smaller value than one returned by previous\n> nextval().\n> - They may want to avoid \"hole\" of a sequence as much as possible, e.g.,\n> as far as the server is running normally. If cache is set to greater than\n> one, such \"hole\" can happen even thought the server doesn't crash yet.\n>\n>\n> > FWIW I plan to explore the idea of looking at sequence page LSN, and\n> flushing up to that position.\n>\n> Sounds great, thanks!\n>\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n>\n>\n>\n\nSequence validation by step, in total is great. If the sequence is Familie or professional, does it make sense to a have a total validation by an expert. I can only say true by chi square Networks, but would a medical opinion be an improvement?Fujii Masao <masao.fujii@oss.nttdata.com> schrieb am Di., 28. Dez. 2021, 07:56:\n\nOn 2021/12/24 19:40, Tomas Vondra wrote:\n> Maybe, but what would such workload look like? Based on the tests I did, such workload probably can't generate any WAL. 
The amount of WAL added by the change is tiny, the regression is caused by having to flush WAL.\n> \n> The only plausible workload I can think of is just calling nextval, and the cache pretty much fixes that.\n\nSome users don't want to increase cache setting, do they? Because\n\n- They may expect that setval() affects all subsequent nextval(). But if cache is set to greater than one, the value set by setval() doesn't affect other backends until they consumed all the cached sequence values.\n- They may expect that the value returned from nextval() is basically increased monotonically. If cache is set to greater than one, subsequent nextval() can easily return smaller value than one returned by previous nextval().\n- They may want to avoid \"hole\" of a sequence as much as possible, e.g., as far as the server is running normally. If cache is set to greater than one, such \"hole\" can happen even thought the server doesn't crash yet.\n\n\n> FWIW I plan to explore the idea of looking at sequence page LSN, and flushing up to that position.\n\nSounds great, thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 28 Dec 2021 09:28:30 +0100", "msg_from": "Sascha Kuhl <yogidabanli@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "Hi\n\nút 28. 12. 2021 v 9:28 odesílatel Sascha Kuhl <yogidabanli@gmail.com>\nnapsal:\n\n> Sequence validation by step, in total is great. If the sequence is Familie\n> or professional, does it make sense to a have a total validation by an\n> expert. I can only say true by chi square Networks, but would a medical\n> opinion be an improvement?\n>\n\nIs it generated by boot or by a human?\n\n\n\n> Fujii Masao <masao.fujii@oss.nttdata.com> schrieb am Di., 28. Dez. 
2021,\n> 07:56:\n>\n>>\n>>\n>> On 2021/12/24 19:40, Tomas Vondra wrote:\n>> > Maybe, but what would such workload look like? Based on the tests I\n>> did, such workload probably can't generate any WAL. The amount of WAL added\n>> by the change is tiny, the regression is caused by having to flush WAL.\n>> >\n>> > The only plausible workload I can think of is just calling nextval, and\n>> the cache pretty much fixes that.\n>>\n>> Some users don't want to increase cache setting, do they? Because\n>>\n>> - They may expect that setval() affects all subsequent nextval(). But if\n>> cache is set to greater than one, the value set by setval() doesn't affect\n>> other backends until they consumed all the cached sequence values.\n>> - They may expect that the value returned from nextval() is basically\n>> increased monotonically. If cache is set to greater than one, subsequent\n>> nextval() can easily return smaller value than one returned by previous\n>> nextval().\n>> - They may want to avoid \"hole\" of a sequence as much as possible, e.g.,\n>> as far as the server is running normally. If cache is set to greater than\n>> one, such \"hole\" can happen even thought the server doesn't crash yet.\n>>\n>>\n>> > FWIW I plan to explore the idea of looking at sequence page LSN, and\n>> flushing up to that position.\n>>\n>> Sounds great, thanks!\n>>\n>> Regards,\n>>\n>> --\n>> Fujii Masao\n>> Advanced Computing Technology Center\n>> Research and Development Headquarters\n>> NTT DATA CORPORATION\n>>\n>>\n>>\n\nHiút 28. 12. 2021 v 9:28 odesílatel Sascha Kuhl <yogidabanli@gmail.com> napsal:Sequence validation by step, in total is great. If the sequence is Familie or professional, does it make sense to a have a total validation by an expert. I can only say true by chi square Networks, but would a medical opinion be an improvement?Is it generated by boot or by a human? Fujii Masao <masao.fujii@oss.nttdata.com> schrieb am Di., 28. Dez. 
2021, 07:56:\n\nOn 2021/12/24 19:40, Tomas Vondra wrote:\n> Maybe, but what would such workload look like? Based on the tests I did, such workload probably can't generate any WAL. The amount of WAL added by the change is tiny, the regression is caused by having to flush WAL.\n> \n> The only plausible workload I can think of is just calling nextval, and the cache pretty much fixes that.\n\nSome users don't want to increase cache setting, do they? Because\n\n- They may expect that setval() affects all subsequent nextval(). But if cache is set to greater than one, the value set by setval() doesn't affect other backends until they consumed all the cached sequence values.\n- They may expect that the value returned from nextval() is basically increased monotonically. If cache is set to greater than one, subsequent nextval() can easily return smaller value than one returned by previous nextval().\n- They may want to avoid \"hole\" of a sequence as much as possible, e.g., as far as the server is running normally. If cache is set to greater than one, such \"hole\" can happen even thought the server doesn't crash yet.\n\n\n> FWIW I plan to explore the idea of looking at sequence page LSN, and flushing up to that position.\n\nSounds great, thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 28 Dec 2021 09:50:39 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> schrieb am Di., 28. Dez. 2021,\n09:51:\n\n> Hi\n>\n> út 28. 12. 2021 v 9:28 odesílatel Sascha Kuhl <yogidabanli@gmail.com>\n> napsal:\n>\n>> Sequence validation by step, in total is great. If the sequence is\n>> Familie or professional, does it make sense to a have a total validation by\n>> an expert. 
I can only say true by chi square Networks, but would a medical\n>> opinion be an improvement?\n>>\n>\n> Is it generated by boot or by a human?\n>\n\nI validation my family and Société, only when them Show me not their\nSekret, part of their truth. Works fine by a Boot level, as far as I can\ndetektei, without the Boot showing up 😉\n\n>\n>\n>\n>> Fujii Masao <masao.fujii@oss.nttdata.com> schrieb am Di., 28. Dez. 2021,\n>> 07:56:\n>>\n>>>\n>>>\n>>> On 2021/12/24 19:40, Tomas Vondra wrote:\n>>> > Maybe, but what would such workload look like? Based on the tests I\n>>> did, such workload probably can't generate any WAL. The amount of WAL added\n>>> by the change is tiny, the regression is caused by having to flush WAL.\n>>> >\n>>> > The only plausible workload I can think of is just calling nextval,\n>>> and the cache pretty much fixes that.\n>>>\n>>> Some users don't want to increase cache setting, do they? Because\n>>>\n>>> - They may expect that setval() affects all subsequent nextval(). But if\n>>> cache is set to greater than one, the value set by setval() doesn't affect\n>>> other backends until they consumed all the cached sequence values.\n>>> - They may expect that the value returned from nextval() is basically\n>>> increased monotonically. If cache is set to greater than one, subsequent\n>>> nextval() can easily return smaller value than one returned by previous\n>>> nextval().\n>>> - They may want to avoid \"hole\" of a sequence as much as possible, e.g.,\n>>> as far as the server is running normally. 
If cache is set to greater than\n>>> one, such \"hole\" can happen even thought the server doesn't crash yet.\n>>>\n>>>\n>>> > FWIW I plan to explore the idea of looking at sequence page LSN, and\n>>> flushing up to that position.\n>>>\n>>> Sounds great, thanks!\n>>>\n>>> Regards,\n>>>\n>>> --\n>>> Fujii Masao\n>>> Advanced Computing Technology Center\n>>> Research and Development Headquarters\n>>> NTT DATA CORPORATION\n>>>\n>>>\n>>>\n\nPavel Stehule <pavel.stehule@gmail.com> schrieb am Di., 28. Dez. 2021, 09:51:Hiút 28. 12. 2021 v 9:28 odesílatel Sascha Kuhl <yogidabanli@gmail.com> napsal:Sequence validation by step, in total is great. If the sequence is Familie or professional, does it make sense to a have a total validation by an expert. I can only say true by chi square Networks, but would a medical opinion be an improvement?Is it generated by boot or by a human?I validation my family and Société, only when them Show me not their Sekret, part of their truth. Works fine by a Boot level, as far as I can detektei, without the Boot showing up 😉 Fujii Masao <masao.fujii@oss.nttdata.com> schrieb am Di., 28. Dez. 2021, 07:56:\n\nOn 2021/12/24 19:40, Tomas Vondra wrote:\n> Maybe, but what would such workload look like? Based on the tests I did, such workload probably can't generate any WAL. The amount of WAL added by the change is tiny, the regression is caused by having to flush WAL.\n> \n> The only plausible workload I can think of is just calling nextval, and the cache pretty much fixes that.\n\nSome users don't want to increase cache setting, do they? Because\n\n- They may expect that setval() affects all subsequent nextval(). But if cache is set to greater than one, the value set by setval() doesn't affect other backends until they consumed all the cached sequence values.\n- They may expect that the value returned from nextval() is basically increased monotonically. 
If cache is set to greater than one, subsequent nextval() can easily return smaller value than one returned by previous nextval().\n- They may want to avoid \"hole\" of a sequence as much as possible, e.g., as far as the server is running normally. If cache is set to greater than one, such \"hole\" can happen even thought the server doesn't crash yet.\n\n\n> FWIW I plan to explore the idea of looking at sequence page LSN, and flushing up to that position.\n\nSounds great, thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 28 Dec 2021 09:53:41 +0100", "msg_from": "Sascha Kuhl <yogidabanli@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "út 28. 12. 2021 v 9:53 odesílatel Sascha Kuhl <yogidabanli@gmail.com>\nnapsal:\n\n>\n>\n> Pavel Stehule <pavel.stehule@gmail.com> schrieb am Di., 28. Dez. 2021,\n> 09:51:\n>\n>> Hi\n>>\n>> út 28. 12. 2021 v 9:28 odesílatel Sascha Kuhl <yogidabanli@gmail.com>\n>> napsal:\n>>\n>>> Sequence validation by step, in total is great. If the sequence is\n>>> Familie or professional, does it make sense to a have a total validation by\n>>> an expert. I can only say true by chi square Networks, but would a medical\n>>> opinion be an improvement?\n>>>\n>>\n>> Is it generated by boot or by a human?\n>>\n>\n> I validation my family and Société, only when them Show me not their\n> Sekret, part of their truth. Works fine by a Boot level, as far as I can\n> detektei, without the Boot showing up 😉\n>\n\ndon't spam this mailing list, please\n\nThank you\n\nPavel\n\n\n>>\n>>\n>>> Fujii Masao <masao.fujii@oss.nttdata.com> schrieb am Di., 28. Dez.\n>>> 2021, 07:56:\n>>>\n>>>>\n>>>>\n>>>> On 2021/12/24 19:40, Tomas Vondra wrote:\n>>>> > Maybe, but what would such workload look like? Based on the tests I\n>>>> did, such workload probably can't generate any WAL. 
The amount of WAL added\n>>>> by the change is tiny, the regression is caused by having to flush WAL.\n>>>> >\n>>>> > The only plausible workload I can think of is just calling nextval,\n>>>> and the cache pretty much fixes that.\n>>>>\n>>>> Some users don't want to increase cache setting, do they? Because\n>>>>\n>>>> - They may expect that setval() affects all subsequent nextval(). But\n>>>> if cache is set to greater than one, the value set by setval() doesn't\n>>>> affect other backends until they consumed all the cached sequence values.\n>>>> - They may expect that the value returned from nextval() is basically\n>>>> increased monotonically. If cache is set to greater than one, subsequent\n>>>> nextval() can easily return smaller value than one returned by previous\n>>>> nextval().\n>>>> - They may want to avoid \"hole\" of a sequence as much as possible,\n>>>> e.g., as far as the server is running normally. If cache is set to greater\n>>>> than one, such \"hole\" can happen even thought the server doesn't crash yet.\n>>>>\n>>>>\n>>>> > FWIW I plan to explore the idea of looking at sequence page LSN, and\n>>>> flushing up to that position.\n>>>>\n>>>> Sounds great, thanks!\n>>>>\n>>>> Regards,\n>>>>\n>>>> --\n>>>> Fujii Masao\n>>>> Advanced Computing Technology Center\n>>>> Research and Development Headquarters\n>>>> NTT DATA CORPORATION\n>>>>\n>>>>\n>>>>\n\nút 28. 12. 2021 v 9:53 odesílatel Sascha Kuhl <yogidabanli@gmail.com> napsal:Pavel Stehule <pavel.stehule@gmail.com> schrieb am Di., 28. Dez. 2021, 09:51:Hiút 28. 12. 2021 v 9:28 odesílatel Sascha Kuhl <yogidabanli@gmail.com> napsal:Sequence validation by step, in total is great. If the sequence is Familie or professional, does it make sense to a have a total validation by an expert. I can only say true by chi square Networks, but would a medical opinion be an improvement?Is it generated by boot or by a human?I validation my family and Société, only when them Show me not their Sekret, part of their truth. 
Works fine by a Boot level, as far as I can detektei, without the Boot showing up 😉don't spam this mailing list, pleaseThank youPavel Fujii Masao <masao.fujii@oss.nttdata.com> schrieb am Di., 28. Dez. 2021, 07:56:\n\nOn 2021/12/24 19:40, Tomas Vondra wrote:\n> Maybe, but what would such workload look like? Based on the tests I did, such workload probably can't generate any WAL. The amount of WAL added by the change is tiny, the regression is caused by having to flush WAL.\n> \n> The only plausible workload I can think of is just calling nextval, and the cache pretty much fixes that.\n\nSome users don't want to increase cache setting, do they? Because\n\n- They may expect that setval() affects all subsequent nextval(). But if cache is set to greater than one, the value set by setval() doesn't affect other backends until they consumed all the cached sequence values.\n- They may expect that the value returned from nextval() is basically increased monotonically. If cache is set to greater than one, subsequent nextval() can easily return smaller value than one returned by previous nextval().\n- They may want to avoid \"hole\" of a sequence as much as possible, e.g., as far as the server is running normally. If cache is set to greater than one, such \"hole\" can happen even thought the server doesn't crash yet.\n\n\n> FWIW I plan to explore the idea of looking at sequence page LSN, and flushing up to that position.\n\nSounds great, thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 28 Dec 2021 09:57:28 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sequences vs. 
synchronous replication" }, { "msg_contents": "On 12/22/21 18:50, Fujii Masao wrote:\n> \n> \n> On 2021/12/22 21:11, Tomas Vondra wrote:\n>> Interesting idea, but I think it has a couple of issues :-(\n> \n> Thanks for the review!\n> \n>> 1) We'd need to know the LSN of the last WAL record for any given\n>> sequence, and we'd need to communicate that between backends somehow.\n>> Which seems rather tricky to do without affecting performance.\n> \n> How about using the page lsn for the sequence? nextval_internal()\n> already uses that to check whether it's less than or equal to checkpoint\n> redo location.\n> \n\nI explored the idea of using page LSN a bit, and there's some good and\nbad news.\n\nThe patch from 22/12 simply checks if the change should/would wait for\nsync replica, and if yes it WAL-logs the sequence increment. There's a\ncouple problems with this, unfortunately:\n\n1) Imagine a high-concurrency environment, with a lot of sessions doing\nnextval('s') at the same time. One session WAL-logs the increment, but\nbefore the WAL gets flushed / sent to replica, another session calls\nnextval. SyncRepNeedsWait() says true, so it WAL-logs it again, moving\nthe page LSN forward. And so on. So in a high-concurrency environments,\nthis simply makes the matters worse - it causes an avalanche of WAL\nwrites instead of saving anything.\n\n(You don't even need multiple sessions - a single session calling\nnextval would have the same issue, WAL-logging every call.)\n\n\n2) It assumes having a synchronous replica, but that's wrong. It's\npartially my fault because I formulated this issue as if it was just\nabout sync replicas, but that's just one symptom. It applies even to\nsystems without any replicas.\n\nImagine you do\n\n BEGIN;\n SELECT nextval('s') FROM generate_series(1,40);\n ROLLBACK;\n\n SELECT nextval('s');\n\nand then you murder the server by \"kill -9\". 
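[Editor's note: the crash scenario above can be made concrete with this toy Python simulation of pre-logged sequence values and an unflushed WAL buffer. It is a model under stated assumptions (batch size 32 standing in for SEQ_LOG_VALS; `ToyWAL`, `run_session` and friends are invented names), not PostgreSQL's recovery code.]

```python
class ToyWAL:
    def __init__(self):
        self.buffered = []  # records written but not yet durable
        self.flushed = []   # records that survived the flush/fsync

    def write(self, record):
        self.buffered.append(record)

    def flush(self):
        self.flushed.extend(self.buffered)
        self.buffered.clear()


def run_session(wal, start=0, batch=32):
    """Model of nextval(): each WAL record pre-logs a batch of values;
    values are then handed out from memory without further logging."""
    value = start
    logged_up_to = start
    handed_out = []

    def nextval():
        nonlocal value, logged_up_to
        if value == logged_up_to:
            logged_up_to += batch
            wal.write(logged_up_to)  # WAL-logged, but maybe never flushed!
        value += 1
        handed_out.append(value)
        return value

    return nextval, handed_out


wal = ToyWAL()
nextval, handed_out = run_session(wal)
for _ in range(5):
    nextval()  # values 1..5 handed out; WAL record "32" only buffered

# kill -9: the buffered WAL is lost; recovery sees only flushed records
recovered_start = wal.flushed[-1] if wal.flushed else 0

nextval2, _ = run_session(wal, start=recovered_start)
duplicate = nextval2()  # returns 1 again -> duplicate value
print(duplicate in handed_out)  # True
```

Flushing the WAL up to the sequence record before acknowledging the commit (which is what tying the flush to the page LSN / XactLastRecEnd achieves) removes the duplicate: recovery then restarts from the pre-logged point instead of behind it.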
If you restart it and do a\nnextval('s') again, the value will likely go back, generating duplicate\nvalues :-(\n\n\nSo I think this approach is not really an improvement over WAL-logging\nevery increment. But there's a better way, I think - we don't need to\ngenerate WAL, we just need to ensure we wait for it to be flushed at\ntransaction end in RecordTransactionCommit().\n\nThat is, instead of generating more WAL, simply update XactLastRecEnd\nand then ensure RecordTransactionCommit flushes/waits etc. Attached is a\npatch doing that - the changes in sequence.c are trivial, changes in\nRecordTransactionCommit simply ensure we flush/wait even without XID\n(this actually raises some additional questions that I'll discuss in a\nseparate message in this thread).\n\nI repeated the benchmark measurements with nextval/insert workloads, to\ncompare this with the other patch (WAL-logging every increment). I had\nto use a different machine, so the the results are not directly\ncomparable to the numbers presented earlier.\n\nOn btrfs, it looks like this. The log-all is the first patch, page-lsn\nis the new patch using page LSN. 
The first columns are raw pgbench tps\nvalues, the last two columns are comparison to master.\n\nOn btrfs, it looks like this (the numbers next to nextval are the cache\nsize, with 1 being the default):\n\n client test master log-all page-lsn log-all page-lsn\n -------------------------------------------------------------------\n 1 insert 829 807 802 97% 97%\n nextval/1 16491 814 16465 5% 100%\n nextval/32 24487 16462 24632 67% 101%\n nextval/64 24516 24918 24671 102% 101%\n nextval/128 32337 33178 32863 103% 102%\n\n client test master log-all page-lsn log-all page-lsn\n -------------------------------------------------------------------\n 4 insert 1577 1590 1546 101% 98%\n nextval/1 45607 1579 21220 3% 47%\n nextval/32 68453 49141 51170 72% 75%\n nextval/64 66928 65534 66408 98% 99%\n nextval/128 83502 81835 82576 98% 99%\n\nThe results seem clearly better, I think.\n\nFor \"insert\" there's no drop at all (same as before), because as soon as\na transaction generates any WAL, it has to flush/wait anyway.\n\nAnd for \"nextval\" there's a drop, but only with 4 clients, and it's much\nsmaller (53% instead of 97%). And increasing the cache size eliminates\neven that.\n\nOut of curiosity I ran the tests on tmpfs too, which should show overhed\nnot related to I/O. The results are similar:\n\n client test master log-all page-lsn log-all page-lsn\n -------------------------------------------------------------------\n 1 insert 44033 43740 43215 99% 98%\n nextval/1 58640 48384 59243 83% 101%\n nextval/32 61089 60901 60830 100% 100%\n nextval/64 60412 61315 61550 101% 102%\n nextval/128 61436 61605 61503 100% 100%\n\n client test master log-all page-lsn log-all page-lsn\n -------------------------------------------------------------------\n 4 insert 88212 85731 87350 97% 99%\n nextval/1 115059 90644 113541 79% 99%\n nextval/32 119765 118115 118511 99% 99%\n nextval/64 119717 119220 118410 100% 99%\n nextval/128 120258 119448 118826 99% 99%\n\nSeems pretty nice, I guess. 
The original patch did pretty well too (only\nabout 20% drop).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 11 Jan 2022 17:07:37 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "On 12/24/21 11:40, Tomas Vondra wrote:\n> \n> ...\n> \n> FWIW I plan to explore the idea of looking at sequence page LSN, and\n> flushing up to that position.\n> \n\nSo, I explored the page LSN idea, and it seems to be working pretty\nnicely. There still is some impact on the workload doing just nextval\ncalls, but it's much better than WAL-logging everything. The patch is\navailable at [1].\n\nWhile working on that patch, I realized this actually affects even\nsystems without any replicas - it's trivial to get a sequence going\nbackwards. Imagine you do this:\n\n BEGIN;\n SELECT nextval('s') FROM generate_series(1,50) s(i);\n ROLLBACK;\n\n SELECT nextval('s');\n\n -- kill -9 postgres\n\nIt's pretty likely a nextval('s') after restarting the server returns a\nvalue from before the last nextval('s'), in case we do not manage to\nflush the WAL before the kill.\n\nThe patch deals with this by updating XactLastRecEnd and then flushing\nup to that point in RecordTransactionCommit(). But for that to happen,\nwe have to do the flush/wait even without a valid XID (which we may not\nhave when nextval gets called outside a transaction).\n\nSo I was wondering what other places do the same thing (generates WAL\nwithout setting a XID), because that might either have similar issues\nwith not flushing data, and/or be affected by this change.\n\nRecordTransactionCommit() says about such cases this:\n\n /*\n * Check if we want to commit asynchronously. 
We can allow the\n * XLOG flush to happen asynchronously if synchronous_commit=off,\n * or if the current transaction has not performed any WAL-logged\n * operation or didn't assign an xid. The transaction can end up\n * not writing any WAL, even if it has an xid, if it only wrote to\n * temporary and/or unlogged tables. It can end up having written\n * WAL without an xid if it did HOT pruning. In case of a crash,\n * the loss of such a transaction will be irrelevant; temp tables\n * will be lost anyway, unlogged tables will be truncated and HOT\n * pruning will be done again later. (Given the foregoing, you\n * might think that it would be unnecessary to emit the XLOG record\n * at all in this case, but we don't currently try to do that. It\n * would certainly cause problems at least in Hot Standby mode,\n * where the KnownAssignedXids machinery requires tracking every\n * XID assignment. It might be OK to skip it only when wal_level <\n * replica, but for now we don't.)\n *\n * However, if we're doing cleanup of any non-temp rels or\n * committing any command that wanted to force sync commit, then we\n * must flush XLOG immediately. (We must not allow asynchronous\n * commit if there are any non-temp tables to be deleted, because\n * we might delete the files before the COMMIT record is flushed to\n * disk. We do allow asynchronous commit if all to-be-deleted\n * tables are temporary though, since they are lost anyway if we\n * crash.)\n */\n\nNote: This relates only to XLogFlush vs. XLogSetAsyncXactLSN, not about\nwaiting for sync standby. For that we ignore forceSyncCommit, which\nseems a bit weird ...\n\nAnyway, I was wondering what happens in practice, so I added very simple\nlogging to RecordTransactionCommit():\n\n if (wrote_log && !markXidCommitted)\n elog(WARNING, \"not flushing at %X/%X\",\n (uint32) (XactLastRecEnd >> 32),\n (uint32) XactLastRecEnd);\n\nand then ran installcheck, which produces ~700 messages. 
Looking at the\nWAL (last few records before the LSN reported by the log message), most\nof this is related to HOT pruning (i.e. PRUNE), but there's plenty of\nother WAL records. And I'm not sure if it's OK to just lose (some of)\nthose messages, as the comment claims for PRUNE.\n\nIt's quite possible I miss something basic/important, and everything is\nfine and dandy, but here's a couple non-pruning examples - command\ntriggering the log message, along with the last few WAL records without\nXID assigned right before RecordTransactionCommit() was called.\n\nA more complete data set (full WAL dump, regression.diffs etc.) is\navailable at [2].\n\n========================================================================\n\nVACUUM ANALYZE num_exp_add;\n---------------------------\nVISIBLE cutoff xid 37114 flags 0x01, blkref #0: rel 1663/63341/3697 ...\nINPLACE off 39, blkref #0: rel 1663/63341/1259 blk 5\n\n\nSELECT proname, provolatile FROM pg_proc\n WHERE oid in ('functest_B_1'::regproc,\n 'functest_B_2'::regproc,\n 'functest_B_3'::regproc,\n 'functest_B_4'::regproc) ORDER BY proname;\n------------------------------------------------\nVACUUM nunused 223, blkref #0: rel 1663/63341/2608 blk 39\nVISIBLE cutoff xid 39928 flags 0x01, blkref #0: rel 1663/63341/2608 ...\nVACUUM nunused 6, blkref #0: rel 1663/63341/2608 blk 40\nMETA_CLEANUP last_cleanup_num_delpages 5, blkref #0: rel 1663/63341 ...\nMETA_CLEANUP last_cleanup_num_delpages 1, blkref #0: rel 1663/63341 ...\nINPLACE off 13, blkref #0: rel 1663/63341/1259 blk 4\nINPLACE off 14, blkref #0: rel 1663/63341/1259 blk 4\nINPLACE off 20, blkref #0: rel 1663/63341/1259 blk 8\nINVALIDATIONS ; inval msgs: catcache 53 catcache 52 catcache 53 ...\n\n\nEXPLAIN (COSTS OFF)\nSELECT t1.a, t1.c, t2.a, t2.c FROM plt1_adv t1 INNER JOIN plt2_adv\nt2 ON (t1.a = t2.a AND t1.c = t2.c) WHERE t1.c IN ('0003', '0004',\n'0005') AND t1.b < 10 ORDER BY t1.a;\n------------------------------------------------\nINPLACE off 11, blkref #0: rel 
1663/63341/69386 blk 67\nINVALIDATIONS ; inval msgs: catcache 53 catcache 52 relcache 82777\n\n\nVACUUM FREEZE indtoasttest;\n---------------------------\nFREEZE_PAGE cutoff xid 47817 ntuples 4, blkref #0: rel 1663/63341 ...\nVISIBLE cutoff xid 47816 flags 0x03, blkref #0: rel 1663/63341/ ...\nINPLACE off 91, blkref #0: rel 1663/63341/69386 blk 37\n\n\nSELECT brin_summarize_range('brin_summarize_multi_idx', 2);\n-----------------------------------------------------------\nINSERT heapBlk 2 pagesPerRange 2 offnum 2, blkref #0: rel 1663/63 ...\nSAMEPAGE_UPDATE offnum 2, blkref #0: rel 1663/63341/73957 blk 2\n\n\nSELECT brin_desummarize_range('brinidx_multi', 0);\n---------------------------------------------------\nDESUMMARIZE pagesPerRange 1, heapBlk 0, page offset 9, blkref #0: ...\n\n\nselect gin_clean_pending_list('gin_test_idx')>10 as many;\n------------------------------------------------------------------------\nDELETE_LISTPAGE ndeleted: 16, blkref #0: rel 1663/63341/71933 blk ...\nDELETE_LISTPAGE ndeleted: 16, blkref #0: rel 1663/63341/71933 blk ...\nDELETE_LISTPAGE ndeleted: 11, blkref #0: rel 1663/63341/71933 blk ...\n\n\nVACUUM no_index_cleanup;\n------------------------------------------------------------------------\nMETA_CLEANUP last_cleanup_num_delpages 0, blkref #0: rel 1663/63341 ...\n\n========================================================================\n\nI wonder if all those cases are subject to the same \"we can lose those\nrecords\" just like PRUNE. I haven't expected to see e.g. the\nBRIN-related records, but I'm more skeptical about cases with multiple\nWAL records. Because how exactly we know we don't lose just some of\nthem? Those might go to two different WAL pages, and we manage to flush\njust one of them? What happens if we keep the INPLACE but lose the\nINVALIDATIONS message right after it? 
I'd bet that'll confuse the hell\nout of logical decoding, for example.\n\n\n[1]\nhttps://www.postgresql.org/message-id/9fb080d5-f509-cca4-1353-fd9da85db1d2%40enterprisedb.com\n\n[2]\nhttps://drive.google.com/drive/folders/1NEjWCG0uCWkrxrp_YZQOzqDfHlfJI8_l?usp=sharing\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 11 Jan 2022 19:47:00 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "\n\nOn 2022/01/12 1:07, Tomas Vondra wrote:\n> I explored the idea of using page LSN a bit\n\nMany thanks!\n\n\n> The patch from 22/12 simply checks if the change should/would wait for\n> sync replica, and if yes it WAL-logs the sequence increment. There's a\n> couple problems with this, unfortunately:\n\nYes, you're right.\n\n\n> So I think this approach is not really an improvement over WAL-logging\n> every increment. But there's a better way, I think - we don't need to\n> generate WAL, we just need to ensure we wait for it to be flushed at\n> transaction end in RecordTransactionCommit().\n> \n> That is, instead of generating more WAL, simply update XactLastRecEnd\n> and then ensure RecordTransactionCommit flushes/waits etc. Attached is a\n> patch doing that - the changes in sequence.c are trivial, changes in\n> RecordTransactionCommit simply ensure we flush/wait even without XID\n> (this actually raises some additional questions that I'll discuss in a\n> separate message in this thread).\n\nThis approach (and also my previous proposal) seems to assume that the value returned from nextval() should not be used until the transaction executing that nextval() has been committed successfully. But I'm not sure how many applications follow this assumption. Some application might use the return value of nextval() instantly before issuing commit command. 
Some might use the return value of nextval() executed in a rolled-back transaction.\n\nIf we want to avoid duplicate sequence values even in those cases, ISTM that the transaction needs to wait for WAL flush and sync rep before nextval() returns the value. Of course, this might cause other issues like performance decrease, though.\n\n\n> On btrfs, it looks like this (the numbers next to nextval are the cache\n> size, with 1 being the default):\n> \n>    client  test         master   log-all  page-lsn   log-all  page-lsn\n>    -------------------------------------------------------------------\n>         1  insert          829       807       802       97%       97%\n>            nextval/1     16491       814     16465        5%      100%\n>            nextval/32    24487     16462     24632       67%      101%\n>            nextval/64    24516     24918     24671      102%      101%\n>            nextval/128   32337     33178     32863      103%      102%\n> \n>    client  test         master   log-all  page-lsn   log-all  page-lsn\n>    -------------------------------------------------------------------\n>         4  insert         1577      1590      1546      101%       98%\n>            nextval/1     45607      1579     21220        3%       47%\n>            nextval/32    68453     49141     51170       72%       75%\n>            nextval/64    66928     65534     66408       98%       99%\n>            nextval/128   83502     81835     82576       98%       99%\n> \n> The results seem clearly better, I think.\n\nThanks for benchmarking this! I agree that page-lsn is obviously better than log-all.\n\n\n> For \"insert\" there's no drop at all (same as before), because as soon as\n> a transaction generates any WAL, it has to flush/wait anyway.\n> \n> And for \"nextval\" there's a drop, but only with 4 clients, and it's much\n> smaller (53% instead of 97%). And increasing the cache size eliminates\n> even that.\n\nYes, but a 53% drop would be critical for some applications that don't want to increase the cache size for some reason. 
So IMO it's better to provide the option to enable/disable that page-lsn approach.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Sat, 15 Jan 2022 14:12:08 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "On 1/15/22 06:12, Fujii Masao wrote:\n> \n> \n> On 2022/01/12 1:07, Tomas Vondra wrote:\n>> I explored the idea of using page LSN a bit\n> \n> Many thanks!\n> \n> \n>> The patch from 22/12 simply checks if the change should/would wait for\n>> sync replica, and if yes it WAL-logs the sequence increment. There's a\n>> couple problems with this, unfortunately:\n> \n> Yes, you're right.\n> \n> \n>> So I think this approach is not really an improvement over WAL-logging\n>> every increment. But there's a better way, I think - we don't need to\n>> generate WAL, we just need to ensure we wait for it to be flushed at\n>> transaction end in RecordTransactionCommit().\n>>\n>> That is, instead of generating more WAL, simply update XactLastRecEnd\n>> and then ensure RecordTransactionCommit flushes/waits etc. Attached is a\n>> patch doing that - the changes in sequence.c are trivial, changes in\n>> RecordTransactionCommit simply ensure we flush/wait even without XID\n>> (this actually raises some additional questions that I'll discuss in a\n>> separate message in this thread).\n> \n> This approach (and also my previous proposal) seems to assume that the \n> value returned from nextval() should not be used until the transaction \n> executing that nextval() has been committed successfully. But I'm not \n> sure how many applications follow this assumption. Some application \n> might use the return value of nextval() instantly before issuing commit \n> command. 
Some might use the return value of nextval() executed in \n> rollbacked transaction.\n> \n\nIMO any application that assumes data from uncommitted transactions is \noutright broken and we should not try to fix that because it's quite \nfutile (and likely will affect well-behaving applications).\n\nThe issue I'm trying to fix in this thread is much narrower - we don't \nactually meet the guarantees for committed transactions (that only did \nnextval without generating any WAL).\n\n> If we want to avoid duplicate sequence value even in those cases, ISTM \n> that the transaction needs to wait for WAL flush and sync rep before \n> nextval() returns the value. Of course, this might cause other issues \n> like performance decrease, though.\n> \n\nRight, something like that. But that'd hurt well-behaving applications, \nbecause by doing the wait earlier (in nextval, not at commit) it \nincreases the probability of waiting.\n\nFWIW I'm not against improvements in this direction, but it goes way \nbeyond fixing the original issue.\n\n> \n>> On btrfs, it looks like this (the numbers next to nextval are the cache\n>> size, with 1 being the default):\n>>\n>>    client  test         master   log-all  page-lsn   log-all  page-lsn\n>>    -------------------------------------------------------------------\n>>         1  insert          829       807       802       97%       97%\n>>            nextval/1     16491       814     16465        5%      100%\n>>            nextval/32    24487     16462     24632       67%      101%\n>>            nextval/64    24516     24918     24671      102%      101%\n>>            nextval/128   32337     33178     32863      103%      102%\n>>\n>>    client  test         master   log-all  page-lsn   log-all  page-lsn\n>>    -------------------------------------------------------------------\n>>         4  insert         1577      1590      1546      101%       98%\n>>            nextval/1     45607      1579     21220        3%       47%\n>>            nextval/32    68453     49141     51170       72%       75%\n>>            nextval/64    66928     65534     66408       98%       99%\n>>            nextval/128   83502     81835     82576       98%       99%\n>>\n>> The results seem clearly better, I think.\n> \n> Thanks for benchmarking this! I agree that page-lsn is obviously better \n> than log-all.\n> \n> \n>> For \"insert\" there's no drop at all (same as before), because as soon as\n>> a transaction generates any WAL, it has to flush/wait anyway.\n>>\n>> And for \"nextval\" there's a drop, but only with 4 clients, and it's much\n>> smaller (53% instead of 97%). And increasing the cache size eliminates\n>> even that.\n> \n> Yes, but 53% drop would be critial for some applications that don't want \n> to increase the cache size for some reasons. So IMO it's better to \n> provide the option to enable/disable that page-lsn approach.\n> \n\nI disagree. This drop applies only to extremely simple transactions - \nonce the transaction does any WAL write, it disappears. Even if the \ntransaction does only a couple reads, it'll go away. I find it hard to \nbelieve there's any serious application doing this.\n\nSo I think we should get it reliable (to not lose data after commit) \nfirst and then maybe see if we can improve this.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 15 Jan 2022 23:57:20 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "\nOn 15.01.22 23:57, Tomas Vondra wrote:\n>> This approach (and also my previous proposal) seems to assume that the \n>> value returned from nextval() should not be used until the transaction \n>> executing that nextval() has been committed successfully. But I'm not \n>> sure how many applications follow this assumption. 
Some application \n>> might use the return value of nextval() instantly before issuing \n>> commit command. Some might use the return value of nextval() executed \n>> in rollbacked transaction.\n>>\n> \n> IMO any application that assumes data from uncommitted transactions is \n> outright broken and we should not try to fix that because it's quite \n> futile (and likely will affect well-behaving applications).\n> \n> The issue I'm trying to fix in this thread is much narrower - we don't \n> actually meet the guarantees for committed transactions (that only did \n> nextval without generating any WAL).\n\nThe wording in the SQL standard is:\n\n\"Changes to the current base value of a sequence generator are not \ncontrolled by SQL-transactions; therefore, commits and rollbacks of \nSQL-transactions have no effect on the current base value of a sequence \ngenerator.\"\n\nThis implies the well-known behavior that consuming a sequence value is \nnot rolled back. But it also appears to imply that committing a \ntransaction has no impact on the validity of a sequence value produced \nduring that transaction. In other words, this appears to imply that \nmaking use of a sequence value produced in a rolled-back transaction is \nvalid.\n\nA very strict reading of this would seem to imply that every single \nnextval() call needs to be flushed to WAL immediately, which is of \ncourse impractical.\n\n\n", "msg_date": "Tue, 25 Jan 2022 10:18:42 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: sequences vs. synchronous replication" }, { "msg_contents": "On 1/25/22 10:18, Peter Eisentraut wrote:\n> \n> On 15.01.22 23:57, Tomas Vondra wrote:\n>>> This approach (and also my previous proposal) seems to assume that \n>>> the value returned from nextval() should not be used until the \n>>> transaction executing that nextval() has been committed successfully. 
\n>>> But I'm not sure how many applications follow this assumption. Some \n>>> application might use the return value of nextval() instantly before \n>>> issuing commit command. Some might use the return value of nextval() \n>>> executed in rollbacked transaction.\n>>>\n>>\n>> IMO any application that assumes data from uncommitted transactions is \n>> outright broken and we should not try to fix that because it's quite \n>> futile (and likely will affect well-behaving applications).\n>>\n>> The issue I'm trying to fix in this thread is much narrower - we don't \n>> actually meet the guarantees for committed transactions (that only did \n>> nextval without generating any WAL).\n> \n> The wording in the SQL standard is:\n> \n> \"Changes to the current base value of a sequence generator are not \n> controlled by SQL-transactions; therefore, commits and rollbacks of \n> SQL-transactions have no effect on the current base value of a sequence \n> generator.\"\n> \n> This implies the well-known behavior that consuming a sequence value is \n> not rolled back.  But it also appears to imply that committing a \n> transaction has no impact on the validity of a sequence value produced \n> during that transaction.  In other words, this appears to imply that \n> making use of a sequence value produced in a rolled-back transaction is \n> valid.\n> \n> A very strict reading of this would seem to imply that every single \n> nextval() call needs to be flushed to WAL immediately, which is of \n> course impractical.\n\nI'm not an expert in reading standards, but I'd not interpret it that \nway. I think it simply says the sequence must not go back, no matter \nwhat happened to the transaction.\n\nIMO interpreting this as \"must not lose any increments from uncommitted \ntransactions\" is maybe a bit too strict, and as you point out it's also \nimpractical because it'd mean calling nextval() repeatedly flushes WAL \nall the time. 
Not great for batch loads, for example.\n\nI don't think we need to flush WAL for every nextval() call, if we don't \nwrite WAL for every increment - I think we still can batch WAL for 32 \nincrements just like we do now (AFAICS that'd not contradict even this \nquite strict interpretation of the standard).\n\nOTOH the flush would have to happen immediately, we can't delay that \nuntil the end of the transaction. Which is going to affect even cases \nthat generate WAL for other reasons (e.g. doing insert), which was \nentirely unaffected by the previous patches.\n\nAnd the flush would have to happen even for sessions that didn't write \nWAL (which was what started this thread) - we could use page LSN and \nflush only to that (so we'd flush once and then it'd be noop until the \nsequence increments 32-times and writes another WAL record).\n\nOf course, it's not enough to just flush WAL, we have to wait for the \nsync replica too :-(\n\nI don't have any benchmark results quantifying this yet, but I'll do \nsome tests in the next day or two. But my expectation is this is going \nto be pretty expensive, and considering how concerned we were about \naffecting current workloads, making the impact worse seems wrong.\n\n\nMy opinion is we should focus on fixing this given the current (weaker) \ninterpretation of the standard, i.e. accepting the loss of increments \nobserved only by uncommitted transactions. The page LSN patch seems like \nthe best way to do that so far.\n\n\nWe may try reworking this to provide the stronger guarantees (i.e. not \nlosing even increments from uncommitted transactions) in the future, of \ncourse. 
But considering (a) we're not sure that's really what the SQL \nstandard requires, (b) no one complained about that in years, and (c) \nit's going to make sequences way more expensive, I doubt that's really \ndesirable.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 26 Jan 2022 01:13:54 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: sequences vs. synchronous replication" } ]
[ { "msg_contents": "Hi all,\n(Added Georgios in CC.)\n\nWhen working on the support of LZ4 for pg_receivewal, walmethods.c has\ngained an extra parameter for the compression method. It gets used on\nthe DirectoryMethod instead of the compression level to decide which\ntype of compression is used. One thing that I left out during this\nprevious work is that the TarMethod also gained knowledge of this\ncompression method, but we still use the compression level to check if\ntars should be compressed or not.\n\nThis is wrong in multiple respects. First, this is not consistent with\nthe directory method, making walmethods.c harder to figure out.\nSecond, this is not extensible if we want to introduce more\ncompression methods in pg_basebackup, like LZ4. This reflects on the\noptions used by pg_receivewal and pg_basebackup, which are\ninconsistent as well.\n\nThe attached patch refactors the code of pg_basebackup and the\nTarMethod of walmethods.c to use the compression method where it\nshould, splitting entirely the logic related to the compression level.\n\nThis is one step toward the introduction of LZ4 in pg_basebackup, but\nthis refactoring is worth doing on its own, hence a separate thread to\ndeal with this problem first. 
The options of pg_basebackup are\nreworked to be consistent with pg_receivewal, as follows:\n- --compress ranges now from 1 to 9, instead of 0 to 9.\n- --compression-method={none,gzip} is added, the default is none, same\nas HEAD.\n- --gzip/-z has the same meaning as before, being just a synonym of\n--compression-method=gzip with the default compression level of ZLIB\nassigned if there is no --compress.\n\nOne more thing that I have noticed while hacking this stuff is that we\nhave no regression tests for gzip with pg_basebackup, so I have added\nsome that are skipped when not compiling the code with ZLIB.\n\nOpinions?\n--\nMichael", "msg_date": "Sat, 18 Dec 2021 20:29:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Sat, Dec 18, 2021 at 6:29 AM Michael Paquier <michael@paquier.xyz> wrote:\n> This is one step toward the introduction of LZ4 in pg_basebackup, but\n> this refactoring is worth doing on its own, hence a separate thread to\n> deal with this problem first. The options of pg_basebackup are\n> reworked to be consistent with pg_receivewal, as follows:\n> - --compress ranges now from 1 to 9, instead of 0 to 9.\n> - --compression-method={none,gzip} is added, the default is none, same\n> as HEAD.\n> - --gzip/-z has the same meaning as before, being just a synonym of\n> --compression-method=gzip with the default compression level of ZLIB\n> assigned if there is no --compress.\n\nOne thing we should keep in mind is that I'm also working on adding\nserver-side compression, initially with gzip, but Jeevan Ladhe has\nposted patches to extend that to LZ4. So however we structure the\noptions they should take that into account. Currently that patch set\nadds --server-compression={none,gzip,gzipN} where N from 1 to 9, but\nperhaps it should be done differently. 
Not sure.\n\n> One more thing that I have noticed while hacking this stuff is that we\n> have no regression tests for gzip with pg_basebackup, so I have added\n> some that are skipped when not compiling the code with ZLIB.\n\nIf they don't decompress the backup and run pg_verifybackup on it then\nI'm not sure how much they help. Yet, I don't know how to do that\nportably.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Dec 2021 10:19:44 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Mon, Dec 20, 2021 at 10:19:44AM -0500, Robert Haas wrote:\n> One thing we should keep in mind is that I'm also working on adding\n> server-side compression, initially with gzip, but Jeevan Ladhe has\n> posted patches to extend that to LZ4. So however we structure the\n> options they should take that into account. Currently that patch set\n> adds --server-compression={none,gzip,gzipN} where N from 1 to 9, but\n> perhaps it should be done differently. Not sure.\n\nYeah, consistency would be good. For the client-side compression of\nLZ4, we have shaped things around the existing --compress option, and\nthere is 6f164e6 that offers an API to parse that at option-level,\nmeaning less custom error strings. I'd like to think that splitting\nthe compression level and the compression method is still the right\nchoice, except if --server-compression combined with a client-side\ncompression is a legal combination. This would not really make sense,\nthough, no? 
So we'd better block this possibility from the start?\n\n>> One more thing that I have noticed while hacking this stuff is that we\n>> have no regression tests for gzip with pg_basebackup, so I have added\n>> some that are skipped when not compiling the code with ZLIB.\n> \n> If they don't decompress the backup and run pg_verifybackup on it then\n> I'm not sure how much they help. Yet, I don't know how to do that\n> portably.\n\nThey help in checking that an environment does not use a buggy set of\nGZIP, at least. Using pg_verifybackup on a base backup with\n--format='t' could be tweaked with $ENV{TAR} for the tarballs\ngeneration, for example, as we do in some other tests. Option sets\nlike \"xvf\" or \"zxvf\" should be rather portable across the buildfarm,\nno? I'd like to think that this is not a requirement for adding\nchecks in the compression path, as a first step, though, but I agree\nthat it could be extended more.\n--\nMichael", "msg_date": "Tue, 21 Dec 2021 09:43:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "\nHi,\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Saturday, December 18th, 2021 at 12:29 PM, Michael Paquier\n<michael@paquier.xyz> wrote:\n>Hi all,\n>(Added Georgios in CC.)\n\nthank you for the shout out.\n\n>When working on the support of LZ4 for pg_receivewal, walmethods.c has\n>gained an extra parameter for the compression method. It gets used on\n>the DirectoryMethod instead of the compression level to decide which\n>type of compression is used. One thing that I left out during this\n>previous work is that the TarMethod also gained knowledge of this\n>compression method, but we still use the compression level to check if\n>tars should be compressed or not.\n>\n>This is wrong on multiple aspects. 
First, this is not consistent with\n>the directory method, making walmethods.c harder to figure out.\n>Second, this is not extensible if we want to introduce more\n>compression methods in pg_basebackup, like LZ4. This reflects on the\n>options used by pg_receivewal and pg_basebackup, that are not\n>inconsistent as well.\n\nAgreed with all the above.\n\n>The attached patch refactors the code of pg_basebackup and the\n>TarMethod of walmethods.c to use the compression method where it\n>should, splitting entirely the logic related the compression level.\n\nThanks.\n\n>This is one step toward the introduction of LZ4 in pg_basebackup, but\n>this refactoring is worth doing on its own, hence a separate thread to\n>deal with this problem first. The options of pg_basebackup are\n>reworked to be consistent with pg_receivewal, as follows:\n>- --compress ranges now from 1 to 9, instead of 0 to 9.\n>- --compression-method={none,gzip} is added, the default is none, same\n>as HEAD.\n>- --gzip/-z has the same meaning as before, being just a synonym of\n>--compression-method=gzip with the default compression level of ZLIB\n>assigned if there is no --compress.\n\nIndeed this is consistent with pg_receivewal. 
It gets my +1.\n\n>One more thing that I have noticed while hacking this stuff is that we\n>have no regression tests for gzip with pg_basebackup, so I have added\n>some that are skipped when not compiling the code with ZLIB.\n\nAs far as the code is concerned, I have a minor nitpick.\n\n+ if (compression_method == COMPRESSION_NONE)\n+ streamer = bbstreamer_plain_writer_new(archive_filename,\n+ archive_file);\n #ifdef HAVE_LIBZ\n- if (compresslevel != 0)\n+ else if (compression_method == COMPRESSION_GZIP)\n {\n strlcat(archive_filename, \".gz\", sizeof(archive_filename));\n streamer = bbstreamer_gzip_writer_new(archive_filename,\n archive_file,\n compresslevel);\n }\n- else\n #endif\n- streamer = bbstreamer_plain_writer_new(archive_filename,\n- archive_file);\n-\n+ else\n+ {\n+ Assert(false); /* not reachable */\n+ }\n\nThe above block moves the initialization of 'streamer' within two conditional\nblocks. Despite this being correct, it is possible that some compilers will\ncomplain for lack of initialization of 'streamer' when it is eventually used a\nbit further ahead in:\n if (must_parse_archive)\n streamer = bbstreamer_tar_archiver_new(streamer);\n\nI propose to initialize streamer to NULL when declared at the top of\nCreateBackupStreamer().\n\nAs far as the tests are concerned, I think that 2 too many tests are skipped\nwhen HAVE_LIBZ is not defined to be 1. 
The patch reads:\n\n+Check ZLIB compression if available.\n+SKIP:\n+{\n+ skip \"postgres was not built with ZLIB support\", 5\n+ if (!check_pg_config(\"#define HAVE_LIBZ 1\"));\n+\n+ $node->command_ok(\n+ [\n+ 'pg_basebackup', '-D',\n+ \"$tempdir/backup_gzip\", '--compression-method',\n+ 'gzip', '--compress', '1', '--no-sync', '--format', 't'\n+ ],\n+ 'pg_basebackup with --compress and --compression-method=gzip');\n+\n+ # Verify that the stored files are generated with their expected\n+ # names.\n+ my @zlib_files = glob \"$tempdir/backup_gzip/*.tar.gz\";\n+ is(scalar(@zlib_files), 2,\n+ \"two files created with gzip (base.tar.gz and pg_wal.tar.gz)\");\n+\n+ # Check the integrity of the files generated.\n+ my $gzip = $ENV{GZIP_PROGRAM};\n+ skip \"program gzip is not found in your system\", 1\n+ if ( !defined $gzip\n+ || $gzip eq ''\n+ || system_log($gzip, '--version') != 0);\n+\n+ my $gzip_is_valid = system_log($gzip, '--test', @zlib_files);\n+ is($gzip_is_valid, 0, \"gzip verified the integrity of compressed data\");\n+ rmtree(\"$tempdir/backup_gzip\");\n+}\n\nYou can see that after the check_pg_config() test, only 3 tests follow,\nnamely:\n * $node->command_ok()\n * is(scalar(), ...)\n * is($gzip_is_valid, ...)\n\n>Opinions?\n\nOther than the minor issues above, I think this is a solid improvement. +1\n\n>--\n>Michael\n\nCheers,\n//Georgios\n\n\n", "msg_date": "Mon, 03 Jan 2022 15:35:57 +0000", "msg_from": "gkokolatos@pm.me", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "(Added Jeevan in CC, for awareness)\n\nOn Mon, Jan 03, 2022 at 03:35:57PM +0000, gkokolatos@pm.me wrote:\n> I propose to initialize streamer to NULL when declared at the top of\n> CreateBackupStreamer().\n\nYes, that may be noisy. Done this way.\n\n> You can see that after the check_pg_config() test, only 3 tests follow,\n> namely:\n> * $node->command_ok()\n> * is(scalar(), ...)\n> * is($gzip_is_valid, ...)\n\nIndeed. 
The CF bot was complaining about that, actually.\n\nThinking more about this stuff, pg_basebackup --compress is an option\nthat exists already for a couple of years, and that's independent of\nthe backend-side compression that Robert and Jeevan are working on, so\nI'd like to move on this code cleanup. We can always figure out the\nLZ4 part for pg_basebackup after, if necessary.\n\nAttached is an updated patch. The CF bot should improve with that.\n--\nMichael", "msg_date": "Wed, 5 Jan 2022 17:00:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Wednesday, January 5th, 2022 at 9:00 AM, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Jan 03, 2022 at 03:35:57PM +0000, gkokolatos@pm.me wrote:\n> > I propose to initialize streamer to NULL when declared at the top of\n> > CreateBackupStreamer().\n>\n> Yes, that may be noisy. Done this way.\n\nGreat.\n\n> > You can see that after the check_pg_config() test, only 3 tests follow,\n> > namely:\n> > * $node->command_ok()\n> > * is(scalar(), ...)\n> > * is($gzip_is_valid, ...)\n>\n> Indeed. The CF bot was complaining about that, actually.\n\nGreat.\n\n> Thinking more about this stuff, pg_basebackup --compress is an option\n> that exists already for a couple of years, and that's independent of\n> the backend-side compression that Robert and Jeevan are working on, so\n> I'd like to move on this code cleanup. We can always figure out the\n> LZ4 part for pg_basebackup after, if necessary.\n\nI agree that the cleanup in itself is helpful. It feels awkward to have two\nutilities under the same path, with distinct options for the same\nfunctionality.\n\nWhen the backend-side compression is completed, will there really be a need for\nclient-side compression? If yes, then it seems logical to have distinct options\nfor them and this cleanup makes sense.
If not, then it seems logical to maintain\nthe current options list and 'simply' change the internals of the code, and this\ncleanup makes sense.\n\n> Attached is an updated patch. The CF bot should improve with that.\n\n+1\n\n> --\n> Michael\n\nCheers,\n//Georgios\n\n\n", "msg_date": "Wed, 05 Jan 2022 09:17:28 +0000", "msg_from": "gkokolatos@pm.me", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Mon, Dec 20, 2021 at 7:43 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Yeah, consistency would be good. For the client-side compression of\n> LZ4, we have shaped things around the existing --compress option, and\n> there is 6f164e6 that offers an API to parse that at option-level,\n> meaning less custom error strings. I'd like to think that splitting\n> the compression level and the compression method is still the right\n> choice, except if --server-compression combined with a client-side\n> compression is a legal combination. This would not really make sense,\n> though, no? So we'd better block this possibility from the start?\n\nRight. It's blocked right now, but Tushar noticed on the other thread\nthat the error message isn't as good as it could be, so I'll improve\nthat a bit. Still the issue wasn't overlooked.\n\n> > If they don't decompress the backup and run pg_verifybackup on it then\n> > I'm not sure how much they help. Yet, I don't know how to do that\n> > portably.\n>\n> They help in checking that an environment does not use a buggy set of\n> GZIP, at least. Using pg_verifybackup on a base backup with\n> --format='t' could be tweaked with $ENV{TAR} for the tarballs\n> generation, for example, as we do in some other tests. Option sets\n> like \"xvf\" or \"zxvf\" should be rather portable across the buildfarm,\n> no? 
I'd like to think that this is not a requirement for adding\n> checks in the compression path, as a first step, though, but I agree\n> that it could be extended more.\n\nOh, well, if we have a working tar available, then it's not so bad. I\nwas thinking we couldn't really count on that, especially on Windows.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 Jan 2022 10:13:16 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Oh, well, if we have a working tar available, then it's not so bad. I\n> was thinking we couldn't really count on that, especially on Windows.\n\nI think the existing precedent is to skip the test if tar isn't there,\ncf pg_basebackup/t/010_pg_basebackup.pl. But certainly the majority of\nbuildfarm animals have it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 Jan 2022 10:22:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Wed, Jan 5, 2022 at 4:17 AM <gkokolatos@pm.me> wrote:\n> When the backend-side compression is completed, were there really be a need for\n> client-side compression? If yes, then it seems logical to have distinct options\n> for them and this cleanup makes sense. If not, then it seems logical to maintain\n> the current options list and 'simply' change the internals of the code, and this\n> cleanup makes sense.\n\nI think we're going to want to offer both options. We can't know\nwhether the user prefers to consume CPU cycles on the server or on the\nclient. 
Compressing on the server has the advantage of potentially\nsaving transfer bandwidth, but the server is also often the busiest\npart of the whole system, and users are often keen to offload as much\nwork as possible.\n\nGiven that, I'd like us to be thinking about what the full set of\noptions looks like once we have (1) compression on either the server\nor the client and (2) multiple compression algorithms and (3) multiple\ncompression levels. Personally, I don't really like the decision made\nby this proposed patch. In this patch's view of the world, -Z is a way\nof providing the compression level for whatever compression algorithm\nyou happen to have selected, but I think of -Z as being the upper-case\nversion of -z which I think of as selecting specifically gzip. It's\nnot particularly intuitive to me that in a command like pg_basebackup\n--compress=<something>, <something> is a compression level rather than\nan algorithm. So what I would propose is probably something like:\n\npg_basebackup --compress=ALGORITHM [--compression-level=NUMBER]\npg_basebackup --server-compress=ALGORITHM [--compression-level=NUMBER]\n\nAnd then make -z short for --compress=gzip and -Z <n> short for\n--compress=gzip --compression-level=<n>. That would be a\nbackward-incompatible change to the definition of --compress, but as\nlong as -Z <n> works the same as today, I don't think many people will\nnotice. If we like, we can notice if the argument to --compress is an\ninteger and suggest using either -Z or --compress=gzip\n--compression-level=<n> instead.\n\nIn the proposed patch, you end up with pg_basebackup\n--compression-method=lz4 -Z2 meaning compression with lz4 level 2. I\nfind that quite odd, though as with all such things, opinions may\nvary. 
In my proposal, that would be an error, because it would be\nequivalent to --compress=lz4 --compress=gzip --compression-level=2,\nand would thus involve conflicting compression method specifications.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 Jan 2022 10:33:38 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Wed, Jan 05, 2022 at 10:33:38AM -0500, Robert Haas wrote:\n> I think we're going to want to offer both options. We can't know\n> whether the user prefers to consume CPU cycles on the server or on the\n> client. Compressing on the server has the advantage of potentially\n> saving transfer bandwidth, but the server is also often the busiest\n> part of the whole system, and users are often keen to offload as much\n> work as possible.\n\nYeah. There are cases for both. I just got to wonder whether it\nmakes sense to allow both server-side and client-side compression to\nbe used at the same time. That would be a rather strange case, but\nwell, with the correct set of options that could be possible.\n\n> Given that, I'd like us to be thinking about what the full set of\n> options looks like once we have (1) compression on either the server\n> or the client and (2) multiple compression algorithms and (3) multiple\n> compression levels. Personally, I don't really like the decision made\n> by this proposed patch. In this patch's view of the world, -Z is a way\n> of providing the compression level for whatever compression algorithm\n> you happen to have selected, but I think of -Z as being the upper-case\n> version of -z which I think of as selecting specifically gzip. It's\n> not particularly intuitive to me that in a command like pg_basebackup\n> --compress=<something>, <something> is a compression level rather than\n> an algorithm. 
So what I would propose is probably something like:\n> \n> pg_basebackup --compress=ALGORITHM [--compression-level=NUMBER]\n> pg_basebackup --server-compress=ALGORITHM [--compression-level=NUMBER]\n>\n> And then make -z short for --compress=gzip and -Z <n> short for\n> --compress=gzip --compression-level=<n>. That would be a\n> backward-incompatible change to the definition of --compress, but as\n> long as -Z <n> works the same as today, I don't think many people will\n> notice. If we like, we can notice if the argument to --compress is an\n> integer and suggest using either -Z or --compress=gzip\n> --compression-level=<n> instead.\n\nMy view of things is slightly different, aka I'd rather keep\n--compress to mean a compression level with an integer option, but\nintroduce a --compression-method={lz4,gzip,none}, with -Z being a\nsynonym of --compression-method=gzip. That's at least the path we\nchose for pg_receivewal. I don't mind sticking with one way or\nanother, as what you are proposing is basically the same thing I have\nin mind, but both tools ought to use the same set of options.\n\nHmm. Perhaps at the end the problem is with --compress, where we\ndon't know if it means a compression level or a compression method?\nFor me, --compress means the former, and for you the latter. So a\nthird way of seeing things is to drop completely --compress, but have\none --compression-method and one --compression-level. That would\nbring a clear split. Or just one --compression-method for the\nclient-side compression as you are proposing for the server-side\ncompression, however I'd like to think that a split between the method\nand level is more intuitive.\n\n> In the proposed patch, you end up with pg_basebackup\n> --compression-method=lz4 -Z2 meaning compression with lz4 level 2. I\n> find that quite odd, though as with all such things, opinions may\n> vary. 
In my proposal, that would be an error, because it would be\n> equivalent to --compress=lz4 --compress=gzip --compression-level=2,\n> and would thus involve conflicting compression method specifications.\n\nIt seems to me that you did not read the patch closely enough. The\nattached patch does not add support for LZ4 in pg_basebackup on the\nclient-side yet. Once it gets added, though, the idea is that using\n--compress with LZ4 would result in an error. That's what happens\nwith pg_receivewal on HEAD, for one. The patch just shapes things to\nplug LZ4 more easily in the existing code of pg_basebackup.c, and\nwalmethods.c.\n\nSo.. As of now, it is actually possible to cut the pie in three\nparts. There are no real objections to the cleanup of walmethods.c\nand the addition of some conditional TAP tests with pg_basebackup and \nclient-side compression, as far as I can see, only to the option\nrenaming part. Attached are two patches, then. 0001 is the cleanup\nof walmethods.c to rely on the compression method, with more tests (tests\nthat could be moved into their own patch, as well). 0002 is the\naddition of the options I suggested upthread, but we may change that \ndepending on what gets used for the server-side compression for\nconsistency so I am not suggesting to merge that until we agree on the\nfull picture. The point of this thread was mostly about 0001, so I am\nfine to discard 0002. Thoughts?\n--\nMichael", "msg_date": "Thu, 6 Jan 2022 14:04:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Wed, Jan 05, 2022 at 10:22:06AM -0500, Tom Lane wrote:\n> I think the existing precedent is to skip the test if tar isn't there,\n> cf pg_basebackup/t/010_pg_basebackup.pl.
But certainly the majority of\n> buildfarm animals have it.\n\nEven Windows environments should be fine, aka recent edc2332.\n--\nMichael\n\n\n", "msg_date": "Thu, 6 Jan 2022 14:21:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Thu, Jan 6, 2022 at 12:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Yeah. There are cases for both. I just got to wonder whether it\n> makes sense to allow both server-side and client-side compression to\n> be used at the same time. That would be a rather strange case, but\n> well, with the correct set of options that could be possible.\n\nI don't think it makes sense to support that. On the one hand, it\ndoesn't seem useful: compressing already-compressed data doesn't\nusually work very well. Alternatively, I suppose the intent could be\nto compress one way for transfer and then decompress and recompress\nfor storage, but that seems too inefficient to take seriously. On the\nother hand, it requires a more complex user interface, and it's\nalready fairly complicated anyway.\n\n> My view of things is slightly different, aka I'd rather keep\n> --compress to mean a compression level with an integer option, but\n> introduce a --compression-method={lz4,gzip,none}, with -Z being a\n> synonym of --compression-method=gzip. That's at least the path we\n> chose for pg_receivewal.
I don't mind sticking with one way or\n> another, as what you are proposing is basically the same thing I have\n> in mind, but both tools ought to use the same set of options.\n\nDid you mean that -z would be a synonym for --compression-method=gzip?\nIt doesn't really make sense for -Z to be that, unless it's also\nsetting the compression level.\n\nMy objection to --compress=$LEVEL is that the compression level seems\nlike it ought to rightfully be subordinate to the choice of algorithm.\nIn general, there's no reason why a compression algorithm has to offer\na choice of compression levels at all, or why they have to be numbered\n0 through 9. For example, lz4 on my system claims to offer compression\nlevels from 1 through 12, plus a separate set of \"fast\" compression\nlevels starting with 1 and going up to an unspecified number. And then\nit also has options to favor decompression speed, change the block\nsize, and a few other parameters. We don't necessarily want to expose\nall of those options, but we should structure things so that we could\nif it became important. The way to do that is to make the compression\nalgorithm the primary setting, and then anything else you can set for\nthat compressor is somehow a subordinate setting.\n\nPut another way, we don't decide first that we want to compress with\nlevel 7, and then afterward decide whether that's gzip, lz4, or bzip2.\nWe pick the compressor first, and then MAYBE think about changing the\ncompression level.\n\n> > In the proposed patch, you end up with pg_basebackup\n> > --compression-method=lz4 -Z2 meaning compression with lz4 level 2. I\n> > find that quite odd, though as with all such things, opinions may\n> > vary. In my proposal, that would be an error, because it would be\n> > equivalent to --compress=lz4 --compress=gzip --compression-level=2,\n> > and would thus involve conflicting compression method specifications.\n>\n> It seems to me that you did not read the patch closely enough. 
The\n> attached patch does not add support for LZ4 in pg_basebackup on the\n> client-side yet. Once it gets added, though, the idea is that using\n> --compress with LZ4 would result in an error. That's what happens\n> with pg_receivewal on HEAD, for one. The patch just shapes things to\n> plug LZ4 more easily in the existing code of pg_basebackup.c, and\n> walmethods.c.\n\nWell what I was looking at was this:\n\n- printf(_(\" -Z, --compress=0-9 compress tar output with given\ncompression level\\n\"));\n+ printf(_(\" -Z, --compress=1-9 compress tar output with given\ncompression level\\n\"));\n+ printf(_(\" --compression-method=METHOD\\n\"\n+ \" method to compress data\\n\"));\n\nThat seems to show that, post-patch, the argument to -Z would be a\ncompression level, even if --compression-method were something other\nthan gzip.\n\nIt's possible that I haven't read something carefully enough, but to\nme, what I said seems to be a straightforward conclusion based on\nlooking at the usage help in the patch. So if I came to the wrong\nconclusion, perhaps that usage help isn't reflecting the situation you\nintend to create, or not as clearly as it ought.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Jan 2022 09:27:19 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Thu, Jan 06, 2022 at 09:27:19AM -0500, Robert Haas wrote:\n> Did you mean that -z would be a synonym for --compression-method=gzip?\n> It doesn't really make sense for -Z to be that, unless it's also\n> setting the compression level.\n\nYes, I meant \"-z\", not \"-Z\", to be a synonym of\n--compression-method=gzip. 
Sorry for the typo.\n\n> My objection to --compress=$LEVEL is that the compression level seems\n> like it ought to rightfully be subordinate to the choice of algorithm.\n> In general, there's no reason why a compression algorithm has to offer\n> a choice of compression levels at all, or why they have to be numbered\n> 0 through 9. For example, lz4 on my system claims to offer compression\n> levels from 1 through 12, plus a separate set of \"fast\" compression\n> levels starting with 1 and going up to an unspecified number. And then\n> it also has options to favor decompression speed, change the block\n> size, and a few other parameters. We don't necessarily want to expose\n> all of those options, but we should structure things so that we could\n> if it became important. The way to do that is to make the compression\n> algorithm the primary setting, and then anything else you can set for\n> that compressor is somehow a subordinate setting.\n\nFor any compression method, that maps to an integer, so.. But I am\nnot going to fight hard on that.\n\n> Put another way, we don't decide first that we want to compress with\n> level 7, and then afterward decide whether that's gzip, lz4, or bzip2.\n> We pick the compressor first, and then MAYBE think about changing the\n> compression level.\n\nWhich is why things should be checked once all the options are\nprocessed. 
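To make that point a bit more concrete, here is a rough sketch of what a single post-parsing check can look like. The enum and function names are invented for the example, this is not the patch's code:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical names, for illustration only. */
typedef enum
{
	COMPRESSION_NONE,
	COMPRESSION_GZIP,
	COMPRESSION_LZ4
} CompressionMethod;

#define LEVEL_UNSET (-1)

/*
 * Run once, after all the options have been parsed, so that the result
 * does not depend on the order of the switches on the command line.
 * Returns false on an incompatible combination of method and level.
 */
static bool
check_compression_options(CompressionMethod method, int level)
{
	if (method == COMPRESSION_GZIP)
		return level == LEVEL_UNSET || (level >= 1 && level <= 9);
	/* "none" and lz4 reject a gzip-style compression level here */
	return level == LEVEL_UNSET;
}
```

The point is only that the check lives in one place and runs after getopt_long() is done, whatever combination of switches led there.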
I'd recommend that you read the option patch a bit more,\nthat may help.\n\n> Well what I was looking at was this:\n> \n> - printf(_(\" -Z, --compress=0-9 compress tar output with given\n> compression level\\n\"));\n> + printf(_(\" -Z, --compress=1-9 compress tar output with given\n> compression level\\n\"));\n> + printf(_(\" --compression-method=METHOD\\n\"\n> + \" method to compress data\\n\"));\n> \n> That seems to show that, post-patch, the argument to -Z would be a\n> compression level, even if --compression-method were something other\n> than gzip.\n\nYes, after the patch --compress would be a compression level. And, if\nattempting to use with --compression-method set to \"none\", or\npotentially \"lz4\", it would just fail. If not using this \"gzip\", the\ncompression level is switched to Z_DEFAULT_COMPRESSION. That's this\narea of the patch, FWIW:\n+ /*\n+ * Compression-related options.\n+ */\n+ switch (compression_method)\n\n> It's possible that I haven't read something carefully enough, but to\n> me, what I said seems to be a straightforward conclusion based on\n> looking at the usage help in the patch. So if I came to the wrong\n> conclusion, perhaps that usage help isn't reflecting the situation you\n> intend to create, or not as clearly as it ought.\n\nPerhaps the --help output could be clearer, then. Do you have a\nsuggestion?\n\nBringing walmethods.c at the same page for the directory and the tar\nmethods was my primary goal here, and the tests are a bonus, so I've\napplied this part for now, leaving pg_basebackup alone until we figure\nout the layer of options we should use. 
Perhaps it would be better to\nrevisit that stuff once the server-side compression has landed.\n--\nMichael", "msg_date": "Fri, 7 Jan 2022 15:43:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Fri, Jan 7, 2022 at 1:43 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Which is why things should be checked once all the options are\n> processed. I'd recommend that you read the option patch a bit more,\n> that may help.\n\nI don't think that the problem here is my lack of understanding. I\nhave two basic concerns about your proposed patch:\n\n1. If, as you propose, we add a new flag --compression-method=METHOD\nthen how will the user specify server-side compression?\n2. If, as we seem to agree, the compression method is more important\nthan the compression level, then why is the option to set the\nless-important thing called just --compress, and the option to set the\nmore important thing has a longer name?\n\nI proposed to solve both of these problems by using\n--compression-level=NUMBER to set the compression level and\n--compress=METHOD or --server-compress=METHOD to set the algorithm and\nspecify on which side it is to be applied. If, instead of doing that,\nwe go with what you have proposed here, then I don't understand how to\nfit server-side compression into the framework in a reasonably concise\nway. I think we would end up with something like pg_basebackup\n--compression-method=lz4 --compress-on-server, which seems rather long\nand awkward. Do you have a better idea?\n\nI think I understand what the patch is doing. I just think it creates\na problem for my patch. And I'd like to know whether you have an idea\nhow to solve that problem. 
And if not, then I'd like you to consider\nthe solution that I am proposing rather than the patch you've already\ngot.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jan 2022 13:36:00 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Thu, Jan 13, 2022 at 01:36:00PM -0500, Robert Haas wrote:\n> 1. If, as you propose, we add a new flag --compression-method=METHOD\n> then how will the user specify server-side compression?\n\nThis would require a completely different option switch, which is\nbasically the same thing as what you are suggesting with\n--server-compress.\n\n> 2. If, as we seem to agree, the compression method is more important\n> than the compression level, then why is the option to set the\n> less-important thing called just --compress, and the option to set the\n> more important thing has a longer name?\n\nI agree that the method is more important than the level for most\nusers, and I would not mind dropping completely --compress in favor of\nsomething else, which is something I implied upthread.\n\n> I proposed to solve both of these problems by using\n> --compression-level=NUMBER to set the compression level and\n> --compress=METHOD or --server-compress=METHOD to set the algorithm and\n> specify on which side it is to be applied. If, instead of doing that,\n> we go with what you have proposed here, then I don't understand how to\n> fit server-side compression into the framework in a reasonably concise\n> way. I think we would end up with something like pg_basebackup\n> --compression-method=lz4 --compress-on-server, which seems rather long\n> and awkward. 
Do you have a better idea?\n\nUsing --compression-level=NUMBER and --server-compress=METHOD to\nspecify a server-side compression method with a level is fine by me,\nbut I find the reuse of --compress to specify a compression method \nconfusing as it maps with the past option we have kept in\npg_basebackup for a couple of years now. Based on your suggested set\nof options, we could then have a --client-compress=METHOD and\n--compression-level=NUMBER to specify a client-side compression method\nwith a level. If we do that, I guess that we should then:\n1) Block the combination of --server-compress and --client-compress.\n2) Remove the existing -Z/--compress and -z/--gzip.\n\nYou have implied 1) upthread as far as I recall, 2) is something I am\nadding on top of it.\n\n> I think I understand what the patch is doing. I just think it creates\n> a problem for my patch. And I'd like to know whether you have an idea\n> how to solve that problem. And if not, then I'd like you to consider\n> the solution that I am proposing rather than the patch you've already\n> got.\n\nI am fine to drop this thread's patch with its set of options and work\non top of your proposal, aka what's drafted two paragraphs above.\n--\nMichael", "msg_date": "Fri, 14 Jan 2022 12:23:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Thu, Jan 13, 2022 at 10:23 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Using --compression-level=NUMBER and --server-compress=METHOD to\n> specify a server-side compression method with a level is fine by me,\n> but I find the reuse of --compress to specify a compression method\n> confusing as it maps with the past option we have kept in\n> pg_basebackup for a couple of years now. 
Based on your suggested set\n> of options, we could then have a --client-compress=METHOD and\n> --compression-level=NUMBER to specify a client-side compression method\n> with a level. If we do that, I guess that we should then:\n> 1) Block the combination of --server-compress and --client-compress.\n> 2) Remove the existing -Z/--compress and -z/--gzip.\n\nI could live with that. I'm not sure that --client-compress instead of\nreusing --compress is going to be better ... but I don't think it's\nawful so much as just not my first choice. I also don't think it would\nbe horrid to leave -z, --gzip, and -Z as shorthands for the\n--client-compress=gzip with --compression-level also in the last case,\ninstead of removing all that stuff.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 14 Jan 2022 16:53:12 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Fri, Jan 14, 2022 at 04:53:12PM -0500, Robert Haas wrote:\n> On Thu, Jan 13, 2022 at 10:23 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> Using --compression-level=NUMBER and --server-compress=METHOD to\n>> specify a server-side compression method with a level is fine by me,\n>> but I find the reuse of --compress to specify a compression method\n>> confusing as it maps with the past option we have kept in\n>> pg_basebackup for a couple of years now. Based on your suggested set\n>> of options, we could then have a --client-compress=METHOD and\n>> --compression-level=NUMBER to specify a client-side compression method\n>> with a level. If we do that, I guess that we should then:\n>> 1) Block the combination of --server-compress and --client-compress.\n>> 2) Remove the existing -Z/--compress and -z/--gzip.\n> \n> I could live with that. I'm not sure that --client-compress instead of\n> reusing --compress is going to be better ... 
but I don't think it's\n> awful so much as just not my first choice. I also don't think it would\n> be horrid to leave -z, --gzip, and -Z as shorthands for the\n> --client-compress=gzip with --compression-level also in the last case,\n> instead of removing all that stuff.\n\nOkay. So, based on this feedback, I guess that something like the\nattached would be what we are looking for. I have maximized the\namount of code removed with the removal of -z/-Z, but I won't fight\nhard if the consensus is to keep them, either. We could also keep\n-z/--gzip, and stick -Z to the new --compression-level with\n--compress removed.\n--\nMichael", "msg_date": "Sat, 15 Jan 2022 11:54:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Fri, Jan 14, 2022 at 10:53 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jan 13, 2022 at 10:23 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > Using --compression-level=NUMBER and --server-compress=METHOD to\n> > specify a server-side compression method with a level is fine by me,\n> > but I find the reuse of --compress to specify a compression method\n> > confusing as it maps with the past option we have kept in\n> > pg_basebackup for a couple of years now. Based on your suggested set\n> > of options, we could then have a --client-compress=METHOD and\n> > --compression-level=NUMBER to specify a client-side compression method\n> > with a level. If we do that, I guess that we should then:\n> > 1) Block the combination of --server-compress and --client-compress.\n> > 2) Remove the existing -Z/--compress and -z/--gzip.\n>\n> I could live with that. I'm not sure that --client-compress instead of\n> reusing --compress is going to be better ... but I don't think it's\n> awful so much as just not my first choice. 
I also don't think it would\n> be horrid to leave -z, --gzip, and -Z as shorthands for the\n> --client-compress=gzip with --compression-level also in the last case,\n> instead of removing all that stuff.\n\nIt never makes sense to compress *both* in server and client, right?\n\nOne argument in that case for using --compress would be that we could\nhave that one take options like --compress=gzip (use gzip in the\nclient) and --compress=server-lz4 (use lz4 on the server), and\nautomatically make it impossible to do both. And maybe also accept\n--compress=client-gzip (which would be the same as just specifying\ngzip).\n\nThat would be an argument for actually keeping --compress and not\nusing --client-compress, because obviously it would be silly to have\n--client-compress=server-lz4...\n\nAnd yes, I agree that considering both server and client compression\neven if we don't have server compression makes sense, since we don't\nwant to change things around again when we get it.\n\nWe could perhaps also consider accepting --compress=gzip:7\n(<method>:<level>) as a way to specify the level, for both client and\nserver side.\n\nI think having --client-compress and --server-compress separately but\nhaving --compression-level *not* being separate would be confusing and\nI *think* that's what the current patch proposes?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Sat, 15 Jan 2022 16:15:26 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Sat, Jan 15, 2022 at 04:15:26PM +0100, Magnus Hagander wrote:\n> I think having --client-compress and --server-compress separately but\n> having --compression-level *not* being separate would be confusing and\n> I *think* that's what the current patch proposes?\n\nYep, your understanding is right. 
The last version of the patch\nposted does exactly that.\n--\nMichael", "msg_date": "Mon, 17 Jan 2022 12:56:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Fri, Jan 14, 2022 at 9:54 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Okay. So, based on this feedback, I guess that something like the\n> attached would be what we are looking for. I have maximized the\n> amount of code removed with the removal of -z/-Z, but I won't fight\n> hard if the consensus is to keep them, either. We could also keep\n> -z/--gzip, and stick -Z to the new --compression-level with\n> --compress removed.\n\nI mean, I really don't understand the benefit of removing -z and -Z.\n-z can remain a synonym for --client-compress=gzip and -Z for\n--client-compress=gzip --compression-level=$N and nobody will be\nharmed. Taking them out reduces backward compatibility for no gain\nthat I can see.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 17 Jan 2022 09:14:05 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Sat, Jan 15, 2022 at 10:15 AM Magnus Hagander <magnus@hagander.net> wrote:\n> It never makes sense to compress *both* in server and client, right?\n\nYeah.\n\n> One argument in that case for using --compress would be that we could\n> have that one take options like --compress=gzip (use gzip in the\n> client) and --compress=server-lz4 (use lz4 on the server), and\n> automatically make it impossible to do both. 
And maybe also accept\n> --compress=client-gzip (which would be the same as just specifying\n> gzip).\n>\n> That would be an argument for actually keeping --compress and not\n> using --client-compress, because obviously it would be silly to have\n> --client-compress=server-lz4...\n\nI still like distinguishing it using the option name, but differently:\n--compress=METHOD and --server-compress=METHOD. But this is also a\nreasonable proposal.\n\n> And yes, I agree that considering both server and client compression\n> even if we don't have server compression makes sense, since we don't\n> want to change things around again when we get it.\n\nEspecially not because I'm pretty close to having a committable patch\nand intend to try to get this into v15. See the refactoring\nbasebackup.c thread.\n\n> We could perhaps also consider accepting --compress=gzip:7\n> (<method>:<level>) as a way to specify the level, for both client and\n> server side.\n\nThat's not crazy either. Initially I was thinking --compression=gzip7\nbut then it turns out lz4 is one of the methods we want to use, and\nlz47 would be, err, slightly unclear. lz4:7 is better, for sure.\n\n> I think having --client-compress and --server-compress separately but\n> having --compression-level *not* being separate would be confusing and\n> I *think* that's what the current patch proposes?\n\nDepends on what you mean by \"separate\". 
There's no proposal to have\n--client-compression-level and also --server-compression-level.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 17 Jan 2022 09:18:31 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Mon, Jan 17, 2022 at 3:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sat, Jan 15, 2022 at 10:15 AM Magnus Hagander <magnus@hagander.net> wrote:\n> > It never makes sense to compress *both* in server and client, right?\n>\n> Yeah.\n>\n> > One argument in that case for using --compress would be that we could\n> > have that one take options like --compress=gzip (use gzip in the\n> > client) and --compress=server-lz4 (use lz4 on the server), and\n> > automatically make it impossible to do both. And maybe also accept\n> > --compress=client-gzip (which would be the same as just specifying\n> > gzip).\n> >\n> > That would be an argument for actually keeping --compress and not\n> > using --client-compress, because obviously it would be silly to have\n> > --client-compress=server-lz4...\n>\n> I still like distinguishing it using the option name, but differently:\n> --compress=METHOD and --server-compress=METHOD. But this is also a\n> reasonable proposal.\n>\n> > And yes, I agree that considering both server and client compression\n> > even if we don't have server compression makes sense, since we don't\n> > want to change things around again when we get it.\n>\n> Especially not because I'm pretty close to having a committable patch\n> and intend to try to get this into v15. See the refactoring\n> basebackup.c thread.\n>\n> > We could perhaps also consider accepting --compress=gzip:7\n> > (<method>:<level>) as a way to specify the level, for both client and\n> > server side.\n>\n> That's not crazy either. 
Initially I was thinking --compression=gzip7\n> but then it turns out lz4 is one of the methods we want to use, and\n> lz47 would be, err, slightly unclear. lz4:7 is better, for sure.\n>\n> > I think having --client-compress and --server-compress separately but\n> > having --compression-level *not* being separate would be confusing and\n> > I *think* that's what the current patch proposes?\n>\n> Depends on what you mean by \"separate\". There's no proposal to have\n> --client-compression-level and also --server-compression-level.\n\nI mean that I think it would be confusing to have\n--client-compression=x, --server-compression=y, and\ncompression-level=z as the options. Why, in that scenario, does the\n\"compression\" part get two parameters, but the \"compression level\"\npart get one. In that case, there should either be --compression=x and\n--compression-level=z (which is what I'd suggest, per above), or there\nshould be --client-compression, --server-compression,\n--client-compression-level and --server-compression-level, for it to\nbe consistent. But having one of them be split in two parameters and\nthe other one not, is what I'd consider confusing.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 17 Jan 2022 15:27:38 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Mon, Jan 17, 2022 at 4:56 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Jan 15, 2022 at 04:15:26PM +0100, Magnus Hagander wrote:\n> > I think having --client-compress and --server-compress separately but\n> > having --compression-level *not* being separate would be confusing and\n> > I *think* that's what the current patch proposes?\n>\n> Yep, your understanding is right. The last version of the patch\n> posted does exactly that.\n\nOk. 
Then that is exactly what I think is confusing, and thus object to :)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 17 Jan 2022 15:29:02 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Mon, Jan 17, 2022 at 9:27 AM Magnus Hagander <magnus@hagander.net> wrote:\n> I mean that I think it would be confusing to have\n> --client-compression=x, --server-compression=y, and\n> compression-level=z as the options. Why, in that scenario, does the\n> \"compression\" part get two parameters, but the \"compression level\"\n> part get one. In that case, there should either be --compression=x and\n> --compression-level=z (which is what I'd suggest, per above), or there\n> should be --client-compression, --server-compression,\n> --client-compression-level and --server-compression-level, for it to\n> be consistent. But having one of them be split in two parameters and\n> the other one not, is what I'd consider confusing.\n\nI don't find that confusing, but confusion is a pretty subjective\nexperience so that doesn't really prove anything. Of the two\nalternatives that you propose, I prefer --compress=[\"server-\"]METHOD\nand --compression-level=NUMBER to having both\n--client-compression-level and --server-compression-level. To me,\nthat's still a bit more surprising than my proposal, because having\nthe client compress stuff and having the server compress stuff feel\nlike somewhat different kinds of things ... 
but it's unsurprising that\nI like my own proposal, and what really matters is that we converge\nrelatively quickly on something we can all live with.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 17 Jan 2022 10:16:44 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On 2022-Jan-17, Robert Haas wrote:\n\n> Of the two\n> alternatives that you propose, I prefer --compress=[\"server-\"]METHOD\n> and --compression-level=NUMBER to having both\n> --client-compression-level and --server-compression-level. To me,\n> that's still a bit more surprising than my proposal, because having\n> the client compress stuff and having the server compress stuff feel\n> like somewhat different kinds of things ... but it's unsurprising that\n> I like my own proposal, and what really matters is that we converge\n> relatively quickly on something we can all live with.\n\nI think having a single option where you specify everything is simpler.\nI propose we accept these forms:\n\n--compress=[{server,client}-]method[:level]\tnew in 15\n--compress=level\t\t(accepted by 14)\n-Z level\t\t\t(accepted by 14)\n-z\t\t\t\t(accepted by 14)\n\nThis way, compatibility with the existing release is maintained; and we\nintroduce all the new functionality without cluttering the interface.\n\nSo starting from 15, in addition to the already supported forms, users\nwill be able to do\n\n--compress=server-gzip:8\t(completely specified options)\n--compress=client-lz4\t\t(client-side lz4 compression, default level)\n--compress=zstd\t\t\t(server-side zstd compression)\n\nthere's a bit of string parsing required to implement, but that seems\nokay to me -- the UI seems clear enough and easily documented.\n\n\n\nOne missing feature in this spec is the ability to specify compression\nto be used with whatever the default method is. 
I'm not sure we want to\nallow for that, but it could be\n--compress=client\n--compress=server\nwhich uses whatever method is default, with whatever level is default,\nat either side.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 17 Jan 2022 12:41:44 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Mon, Jan 17, 2022 at 8:17 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Jan 17, 2022 at 9:27 AM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> > I mean that I think it would be confusing to have\n> > --client-compression=x, --server-compression=y, and\n> > compression-level=z as the options. Why, in that scenario, does the\n> > \"compression\" part get two parameters, but the \"compression level\"\n> > part get one. In that case, there should either be --compression=x and\n> > --compression-level=z (which is what I'd suggest, per above), or there\n> > should be --client-compression, --server-compression,\n> > --client-compression-level and --server-compression-level, for it to\n> > be consistent. But having one of them be split in two parameters and\n> > the other one not, is what I'd consider confusing.\n>\n> I don't find that confusing, but confusion is a pretty subjective\n> experience so that doesn't really prove anything. Of the two\n> alternatives that you propose, I prefer --compress=[\"server-\"]METHOD\n> and --compression-level=NUMBER to having both\n> --client-compression-level and --server-compression-level. To me,\n> that's still a bit more surprising than my proposal, because having\n> the client compress stuff and having the server compress stuff feel\n> like somewhat different kinds of things ... 
but it's unsurprising that\n> I like my own proposal, and what really matters is that we converge\n> relatively quickly on something we can all live with.\n>\n>\nQuick look-over of the email thread:\n\nThe bare \"--compress\" option isn't liked anymore. I would prefer that we\nofficially deprecate -z, -Z, and --compress but otherwise leave them alone\nfor backward compatibility.\n\nWe do not want to entertain performing both server and client compression.\nIt thus seems undesirable to have different sets of options for them.\nTherefore:\n\n--compression-method={gzip|lz4|...}\n--compression-level={string} (which can be any string value, the validation\nlogic for compression-method will evaluate what is provided and error if it\nis not happy, each method would have its own default)\n--compression-location={client|server} (Can be added once server\ncompression is active. I would suggest it would default to server-side\ncompression - which would be a change in behavior by necessity)\n\nIf you really want a concise option here I say we make available:\n\n--compression={method}[;string][;{client|server}]\n\nThe two trailing optional (with default) sub-arguments are unambiguous as\nto which one is present if only two sub-arguments are provided.\n\nDavid J.\n\n", "msg_date": "Mon, 17 Jan 2022 08:42:12 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Mon, Jan 17, 2022 at 8:41 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2022-Jan-17, Robert Haas wrote:\n>\n> > Of the two\n> > alternatives that you propose, I prefer --compress=[\"server-\"]METHOD\n> > and --compression-level=NUMBER to having both\n> > --client-compression-level and --server-compression-level. To me,\n> > that's still a bit more surprising than my proposal, because having\n> > the client compress stuff and having the server compress stuff feel\n> > like somewhat different kinds of things ... but it's unsurprising that\n> > I like my own proposal, and what really matters is that we converge\n> > relatively quickly on something we can all live with.\n>\n> I think having a single option where you specify everything is simpler.\n> I propose we accept these forms:\n>\n> --compress=[{server,client}-]method[:level] new in 15\n> --compress=level (accepted by 14)\n> -Z level (accepted by 14)\n> -z (accepted by 14)\n>\n\nI am also in favor of this option. Whether this is better than deprecating\n--compress and introducing --compression I am having trouble deciding. 
My\npersonal preference is to add --compression and leave --compress alone and\ndeprecated; but we don't usually do anything with deprecations and having\nusers seeing both --compress and --compression out in the wild, even if\nnever at the same time, is bound to elicit questions (though so is seeing\n--compress with \"number only\" rules and \"composite value\" rules...)\n\n\n> This way, compatibility with the existing release is maintained; and we\n> introduce all the new functionality without cluttering the interface.\n>\n\nI would still \"clutter\" the interface with:\n\n--compress-method\n--compress-options (extending from my prior post, I would make this more\ngeneric - i.e., not named \"level\" - and deal with valid values, meaning,\nand format, in a per-method description in the documentation)\n--compress-location\n\nUsers have different preferences for what they want to use, and it provides\na level of self-documentation for the composite specification and a degree\nof explicitness for the actual documentation of the methods.\n\nOne missing feature in this spec is the ability to specify compression\n> to be used with whatever the default method is. I'm not sure we want to\n> allow for that\n>\n\nI'm not too keen on making a default method in code. Saying \"if in doubt\ngzip is a widely used compression method.\" in the documentation seems\nsufficient.\n\nDavid J.\n\nOn Mon, Jan 17, 2022 at 8:41 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:On 2022-Jan-17, Robert Haas wrote:\n\n> Of the two\n> alternatives that you propose, I prefer --compress=[\"server-\"]METHOD\n> and --compression-level=NUMBER to having both\n> --client-compression-level and --server-compression-level. To me,\n> that's still a bit more surprising than my proposal, because having\n> the client compress stuff and having the server compress stuff feel\n> like somewhat different kinds of things ... 
but it's unsurprising that\n> I like my own proposal, and what really matters is that we converge\n> relatively quickly on something we can all live with.\n\nI think having a single option where you specify everything is simpler.\nI propose we accept these forms:\n\n--compress=[{server,client}-]method[:level]     new in 15\n--compress=level                (accepted by 14)\n-Z level                        (accepted by 14)\n-z                              (accepted by 14)I am also in favor of this option.  Whether this is better than deprecating --compress and introducing --compression I am having trouble deciding.  My personal preference is to add --compression and leave --compress alone and deprecated; but we don't usually do anything with deprecations and having users seeing both --compress and --compression out in the wild, even if never at the same time, is bound to elicit questions (though so is seeing --compress with \"number only\" rules and \"composite value\" rules...)\n\nThis way, compatibility with the existing release is maintained; and we\nintroduce all the new functionality without cluttering the interface.I would still \"clutter\" the interface with:--compress-method--compress-options (extending from my prior post, I would make this more generic - i.e., not named \"level\" -  and deal with valid values, meaning, and format, in a per-method description in the documentation)--compress-locationUsers have different preferences for what they want to use, and it provides a level of self-documentation for the composite specification and a degree of explicitness for the actual documentation of the methods.\nOne missing feature in this spec is the ability to specify compression\nto be used with whatever the default method is.  I'm not sure we want to\nallow for thatI'm not too keen on making a default method in code.  
Saying \"if in doubt gzip is a widely used compression method.\" in the documentation seems sufficient.David J.", "msg_date": "Mon, 17 Jan 2022 09:50:42 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Mon, Jan 17, 2022 at 11:50 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>> I think having a single option where you specify everything is simpler.\n>> I propose we accept these forms:\n>>\n>> --compress=[{server,client}-]method[:level] new in 15\n>> --compress=level (accepted by 14)\n>> -Z level (accepted by 14)\n>> -z (accepted by 14)\n>\n> I am also in favor of this option. Whether this is better than deprecating --compress and introducing --compression I am having trouble deciding. My personal preference is to add --compression and leave --compress alone and deprecated; but we don't usually do anything with deprecations and having users seeing both --compress and --compression out in the wild, even if never at the same time, is bound to elicit questions (though so is seeing --compress with \"number only\" rules and \"composite value\" rules...)\n\nAlvaro's proposal is fine with me. I don't see any value in replacing\n--compress with --compression. It's longer but not superior in any way\nthat I can see. Having both seems worst of all -- that's just\nconfusing.\n\n> I'm not too keen on making a default method in code. Saying \"if in doubt gzip is a widely used compression method.\" in the documentation seems sufficient.\n\nYeah, I agree that a default method doesn't seem necessary. 
People who\nwant to compress without thinking hard can use -z; others can say what\nthey want.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 17 Jan 2022 12:48:12 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Mon, Jan 17, 2022 at 12:48:12PM -0500, Robert Haas wrote:\n> Alvaro's proposal is fine with me. I don't see any value in replacing\n> --compress with --compression. It's longer but not superior in any way\n> that I can see. Having both seems worst of all -- that's just\n> confusing.\n\nOkay, that looks like a consensus, then. Robert, would it be better\nto gather all that on the thread that deals with the server-side\ncompression? Doing that here would be fine by me, with the option to\nonly specify the client. Now it would be a bit weird to do things\nwith only the client part and not the server part :)\n--\nMichael", "msg_date": "Tue, 18 Jan 2022 10:36:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Mon, Jan 17, 2022 at 8:36 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Jan 17, 2022 at 12:48:12PM -0500, Robert Haas wrote:\n> > Alvaro's proposal is fine with me. I don't see any value in replacing\n> > --compress with --compression. It's longer but not superior in any way\n> > that I can see. Having both seems worst of all -- that's just\n> > confusing.\n>\n> Okay, that looks like a consensus, then. Robert, would it be better\n> to gather all that on the thread that deals with the server-side\n> compression? Doing that here would be fine by me, with the option to\n> only specify the client. 
Now it would be a bit weird to do things\n> with only the client part and not the server part :)\n\nI think it could make sense for you implement\n--compress=METHOD[:LEVEL], keeping -z and -Z N as synonyms for\n--compress=gzip and --compress=gzip:N, and with --compress=N being\ninterpreted as --compress=gzip:N. Then I'll generalize that to\n--compress=[{client|server}-]METHOD[:LEVEL] on the other thread. I\nthink we should leave it where, if you don't say either client or\nserver, you get client, because that's the historical behavior.\n\nIf that doesn't work for you, please let me know what you would prefer.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 18 Jan 2022 10:04:56 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Tue, Jan 18, 2022 at 10:04:56AM -0500, Robert Haas wrote:\n> I think it could make sense for you implement\n> --compress=METHOD[:LEVEL], keeping -z and -Z N as synonyms for\n> --compress=gzip and --compress=gzip:N, and with --compress=N being\n> interpreted as --compress=gzip:N. Then I'll generalize that to\n> --compress=[{client|server}-]METHOD[:LEVEL] on the other thread. I\n> think we should leave it where, if you don't say either client or\n> server, you get client, because that's the historical behavior.\n> \n> If that doesn't work for you, please let me know what you would prefer.\n\nWFM. Attached is a patch that extends --compress to handle a method\nwith an optional compression level. Some extra tests are added to\ncover all that.\n\nThoughts?\n--\nMichael", "msg_date": "Wed, 19 Jan 2022 13:27:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Tue, Jan 18, 2022 at 11:27 PM Michael Paquier <michael@paquier.xyz> wrote:\n> WFM. 
Attached is a patch that extends --compress to handle a method\n> with an optional compression level. Some extra tests are added to\n> cover all that.\n\nI think that this will reject something like --compress=nonetheless by\ntelling you that 't' is not a valid separator. I think it would be\nbetter to code this so that we first identify the portion preceding\nthe first colon, or the whole string if there is no colon. Then we\ncheck whether that part is a compression method that we recognize. If\nnot, we complain. If so, we then check whatever is after the separator\nfor validity - and this might differ by type. For example, we could\nthen immediately reject none:4, and if in the future we want to allow\nlz4:fast3, we could.\n\nI think the code that handles the bare integer case should be at the\ntop of the function and should return, because that code is short.\nThen the rest of the function doesn't need to be indented as deeply.\n\n\"First check after the compression method\" seems like it would be\nbetter written \"First check for the compression method\" or \"First\ncheck the compression method\".\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Jan 2022 08:35:23 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On 2022-Jan-19, Michael Paquier wrote:\n\n> +\tprintf(_(\" -Z, --compress=[{gzip,none}[:LEVEL] or [LEVEL]\\n\"\n> +\t\t\t \" compress tar output with given compression method or level\\n\"));\n\nNote there is an extra [ before the {gzip bit.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"I suspect most samba developers are already technically insane...\nOf course, since many of them are Australians, you can't tell.\" (L. 
Torvalds)\n\n\n", "msg_date": "Wed, 19 Jan 2022 12:50:44 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Wed, Jan 19, 2022 at 12:50:44PM -0300, Alvaro Herrera wrote:\n> Note there is an extra [ before the {gzip bit.\n\nThanks!\n--\nMichael", "msg_date": "Thu, 20 Jan 2022 14:32:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Wed, Jan 19, 2022 at 08:35:23AM -0500, Robert Haas wrote:\n> I think that this will reject something like --compress=nonetheless by\n> telling you that 't' is not a valid separator. I think it would be\n> better to code this so that we first identify the portion preceding\n> the first colon, or the whole string if there is no colon. Then we\n> check whether that part is a compression method that we recognize. If\n> not, we complain.\n\nWell, if no colon is specified, we still need to check if optarg\nis a pure integer if it does not match any of the supported methods,\nas --compress=0 should be backward compatible with no compression and\n--compress=1~9 should imply gzip, no?\n\n> If so, we then check whatever is after the separator\n> for validity - and this might differ by type. 
For example, we could\n> then immediately reject none:4, and if in the future we want to allow\n> lz4:fast3, we could.\n\nOkay.\n\n> I think the code that handles the bare integer case should be at the\n> top of the function and should return, because that code is short.\n> Then the rest of the function doesn't need to be indented as deeply.\n\nDone this way, I hope.\n--\nMichael", "msg_date": "Thu, 20 Jan 2022 16:03:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Thu, Jan 20, 2022 at 2:03 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Well, if no colon is specified, we still need to check if optarg\n> is a pure integer if it does not match any of the supported methods,\n> as --compress=0 should be backward compatible with no compression and\n> --compress=1~9 should imply gzip, no?\n\nYes.\n\n> Done this way, I hope.\n\nThis looks better, but this part could be switched around:\n\n+ /*\n+ * Check if the first part of the string matches with a supported\n+ * compression method.\n+ */\n+ if (pg_strcasecmp(firstpart, \"gzip\") != 0 &&\n+ pg_strcasecmp(firstpart, \"none\") != 0)\n+ {\n+ /*\n+ * It does not match anything known, so check for the\n+ * backward-compatible case of only an integer, where the implied\n+ * compression method changes depending on the level value.\n+ */\n+ if (!option_parse_int(firstpart, \"-Z/--compress\", 0,\n+ INT_MAX, levelres))\n+ exit(1);\n+\n+ *methodres = (*levelres > 0) ?\n+ COMPRESSION_GZIP : COMPRESSION_NONE;\n+ return;\n+ }\n+\n+ /* Supported method found. */\n+ if (pg_strcasecmp(firstpart, \"gzip\") == 0)\n+ *methodres = COMPRESSION_GZIP;\n+ else if (pg_strcasecmp(firstpart, \"none\") == 0)\n+ *methodres = COMPRESSION_NONE;\n\nYou don't need to test for gzip and none in two places each. 
Just make\nthe block with the \"It does not match ...\" comment the \"else\" clause\nfor this last part.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 20 Jan 2022 10:25:43 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Thu, Jan 20, 2022 at 10:25:43AM -0500, Robert Haas wrote:\n> You don't need to test for gzip and none in two places each. Just make\n> the block with the \"It does not match ...\" comment the \"else\" clause\n> for this last part.\n\nIndeed, that looks better. I have done an extra pass on this stuff\nthis morning, and applied it, so we should be done here.\n--\nMichael", "msg_date": "Fri, 21 Jan 2022 11:18:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Thu, Jan 20, 2022 at 9:18 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Thu, Jan 20, 2022 at 10:25:43AM -0500, Robert Haas wrote:\n> > You don't need to test for gzip and none in two places each. Just make\n> > the block with the \"It does not match ...\" comment the \"else\" clause\n> > for this last part.\n>\n> Indeed, that looks better. I have done an extra pass on this stuff\n> this morning, and applied it, so we should be done here.\n\nThanks. One thing I just noticed is that the enum we're using here is\ncalled WalCompressionMethod. But we're not compressing WAL. We're\ncompressing tarfiles of the data directory.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 21 Jan 2022 09:57:41 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Fri, Jan 21, 2022 at 09:57:41AM -0500, Robert Haas wrote:\n> Thanks. 
One thing I just noticed is that the enum we're using here is\n> called WalCompressionMethod. But we're not compressing WAL. We're\n> compressing tarfiles of the data directory.\n\nAlso, having this enum in walmethods.h is perhaps not the best place\neither, even more if you plan to use that in pg_basebackup for the\nserver-side compression. One idea is to rename this enum to\nDataCompressionMethod, moving it into a new header, like common.h as\nof the attached.\n--\nMichael", "msg_date": "Sat, 22 Jan 2022 14:46:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Sat, Jan 22, 2022 at 12:47 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Also, having this enum in walmethods.h is perhaps not the best place\n> either, even more if you plan to use that in pg_basebackup for the\n> server-side compression. One idea is to rename this enum to\n> DataCompressionMethod, moving it into a new header, like common.h as\n> of the attached.\n\nWell, we also have CompressionAlgorithm competing for the same job.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 25 Jan 2022 15:14:13 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Tue, Jan 25, 2022 at 03:14:13PM -0500, Robert Haas wrote:\n> On Sat, Jan 22, 2022 at 12:47 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> Also, having this enum in walmethods.h is perhaps not the best place\n>> either, even more if you plan to use that in pg_basebackup for the\n>> server-side compression. 
One idea is to rename this enum to\n>> DataCompressionMethod, moving it into a new header, like common.h as\n>> of the attached.\n> \n> Well, we also have CompressionAlgorithm competing for the same job.\n\nSure, but I don't think that it is a good idea to unify that yet, at\nleast not until pg_dump is able to handle LZ4 as an option, as the\nmain benefit that we'd gain here is to be able to change the code to a\nswitch/case without defaults where we would detect code paths that\nrequire a refresh once adding support for a new option.\n--\nMichael", "msg_date": "Wed, 26 Jan 2022 10:15:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" }, { "msg_contents": "On Tue, Jan 25, 2022 at 8:15 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Sure, but I don't think that it is a good idea to unify that yet, at\n> least not until pg_dump is able to handle LZ4 as an option, as the\n> main benefit that we'd gain here is to be able to change the code to a\n> switch/case without defaults where we would detect code paths that\n> require a refresh once adding support for a new option.\n\nI think those places could just throw a \"lz4 compression is not\nsupported\" elog() and then you could just grep for everyplace where\nthat string appears. But I am not of a mind to fight about it. I was\njust pointing out the duplication.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 26 Jan 2022 15:24:25 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring of compression options in pg_basebackup" } ]
[ { "msg_contents": "I did some desultory investigation of the idea proposed at [1]\nthat we should refactor the regression test scripts to try to\nreduce their interdependencies. I soon realized that one of\nthe stumbling blocks to this is that we've tended to concentrate\ndata-loading COPY commands, as well as C function creation\ncommands, into a few files to reduce the notational cruft of\nsubstituting path names and the like into the test scripts.\nThat is, we don't want to have even more scripts that have to be\ntranslated from input/foo.source and output/foo.source into\nrunnable scripts and test results.\n\nThis led me to wonder why we couldn't get rid of that entire\nmechanism in favor of some less-painful way of getting that\ninformation into the scripts. If we had the desired values in\npsql variables, we could do what we need easily, for example\ninstead of\n\nCREATE FUNCTION check_primary_key ()\n\tRETURNS trigger\n\tAS '@libdir@/refint@DLSUFFIX@'\n\tLANGUAGE C;\n\nsomething like\n\nCREATE FUNCTION check_primary_key ()\n\tRETURNS trigger\n\tAS :'LIBDIR'\n\t'/refint'\n\t:'DLSUFFIX'\n\tLANGUAGE C;\n\n(The extra line breaks are needed to convince SQL that the\nadjacent string literals should be concatenated. We couldn't\nhave done this so easily before psql had the :'variable'\nnotation, but that came in in 9.0.)\n\nI see two ways we could get the info from pg_regress into psql\nvariables:\n\n1. Add \"-v VARIABLE=VALUE\" switches to the psql invocations.\nThis requires no new psql capability, but it does introduce\nthe problem of getting correct shell quoting of the values.\nI think we'd need to either duplicate appendShellString in\npg_regress.c, or start linking both libpq and libpgfeutils.a\ninto pg_regress to be able to use appendShellString itself.\nIn the past we've not wanted to link libpq into pg_regress\n(though I admit I've forgotten the argument for not doing so).\n\n2. 
Export the values from pg_regress as environment variables,\nand then add a way for the test scripts to read those variables.\nI was a bit surprised to realize that we didn't have any way\nto do that already --- psql has \\setenv, so why did we never\ninvent \\getenv?\n\nOn the whole I prefer #2, as it seems cleaner and it adds some\nactually useful-to-end-users psql functionality.\n\nAttached is a really incomplete, quick-n-dirty POC showing that\nthis can be made to work. If there aren't objections or better\nideas, I'll see about fleshing this out.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/20211217182518.GA2529654%40rfd.leadboat.com", "msg_date": "Sat, 18 Dec 2021 18:53:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Getting rid of regression test input/ and output/ files" }, { "msg_contents": "I wrote:\n> This led me to wonder why we couldn't get rid of that entire\n> mechanism in favor of some less-painful way of getting that\n> information into the scripts. If we had the desired values in\n> psql variables, we could do what we need easily, for example ...\n\nHere's some fleshed-out patches for this.\n\n0001 adds the \\getenv command to psql; now with documentation\nand a simple regression test.\n\n0002 tweaks pg_regress to export the needed values as environment\nvariables, and modifies the test scripts to use those variables.\n(For ease of review, this patch modifies the scripts in-place,\nand then 0003 will move them.) A few comments on this:\n\n* I didn't see any value in exporting @testtablespace@ as a separate\nvariable; we might as well let the test script know how to construct\nthat path name.\n\n* I concluded that the right way to handle the concatenation issue\nis *not* to rely on SQL literal concatenation, but to use psql's\n\\set command to concatenate parts of a string. 
In particular this\ngives us a clean way to handle quoting/escaping rules in the places\nwhere a pathname has to be embedded in some larger string, such as\na function body. The golden rule for that seems to be \"use one \\set\nper level of quoting\". I believe this code is now fairly proof\nagainst situations that would completely break the existing way of\ndoing things, such as pathnames with quotes or backslashes in them.\n(It's hard to test the embedded-quote case, because that breaks the\nMakefiles too, but I did get through the regression tests with a\npath including a backslash.)\n\n* There are a couple of places where the existing tests involve\nsubstituting a path name into expected query output or error messages.\nThis technique cannot handle that, but we have plenty of prior art for\ndealing with such cases. I changed file_fdw to use a filter function\nto hide the pathnames in EXPLAIN output, and tweaked create_function_0\nto show only an edited version of an error message (this is based on a\nsimilar case in infinite_recurse.sql).\n\n0003 simply \"git mv\"'s the scripts and output files into place as\nnormal not-requiring-editing files. Be careful to \"make clean\"\nbefore applying this, else you may have conflicts with the target\nfiles already being present. Also, while you can run the tests\nbetween 0003 and 0004, don't do \"make clean\" in this state or the\nhacky EXTRA_CLEAN rules in dblink and file_fdw will remove files\nyou want.\n\n0004 finally removes the no-longer-needed infrastructure in\npg_regress and the makefiles. (BTW, as far as I can find, the\nMSVC scripts have no provisions for cleaning these generated files?)\n\nThere's some refactoring that could be done afterwards, for example\nthere seems little reason for dblink's paths.sql to continue to exist\nas a separate script. But it seemed best for this patch series to\nconvert the scripts as mechanically as possible.\n\nI'm fairly pleased with how this came out. 
I think these scripts\nwill be *much* easier to maintain in this form. Updating the\noutput/*.source files was always a major pain in the rear, since\nyou couldn't just copy results/ files to them.\n\nComments?\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 19 Dec 2021 16:08:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Getting rid of regression test input/ and output/ files" }, { "msg_contents": ">\n>\n> 0001 adds the \\getenv command to psql; now with documentation\n> and a simple regression test.\n>\n\n+1. Wish I had added this years ago when I had a need for it.\n\n\n>\n> 0002 tweaks pg_regress to export the needed values as environment\n> variables, and modifies the test scripts to use those variables.\n> (For ease of review, this patch modifies the scripts in-place,\n> and then 0003 will move them.) A few comments on this:\n>\n> * I didn't see any value in exporting @testtablespace@ as a separate\n> variable; we might as well let the test script know how to construct\n> that path name.\n>\n> * I concluded that the right way to handle the concatenation issue\n> is *not* to rely on SQL literal concatenation, but to use psql's\n> \\set command to concatenate parts of a string. In particular this\n>\n\n+1 to that, much better than the multi-line thing.\n\nI have a nitpick about the \\getenv FOO FOO lines.\nIt's a new function to everyone, and to anyone who hasn't seen the\ndocumentation it won't be immediately obvious which one is the ENV var and\nwhich one is the local var. Lowercasing the local var would be a way to\nreinforce which is which to the reader. It would also be consistent with\nvar naming in the rest of the script.\n\n\n>\n> 0004 finally removes the no-longer-needed infrastructure in\n>\n\n+1\nDeleted code is debugged code.\n\n0001 adds the \\getenv command to psql; now with documentation\nand a simple regression test.+1. Wish I had added this years ago when I had a need for it. 
\n\n0002 tweaks pg_regress to export the needed values as environment\nvariables, and modifies the test scripts to use those variables.\n(For ease of review, this patch modifies the scripts in-place,\nand then 0003 will move them.)  A few comments on this:\n\n* I didn't see any value in exporting @testtablespace@ as a separate\nvariable; we might as well let the test script know how to construct\nthat path name.\n\n* I concluded that the right way to handle the concatenation issue\nis *not* to rely on SQL literal concatenation, but to use psql's\n\\set command to concatenate parts of a string.  In particular this+1 to that, much better than the multi-line thing.I have a nitpick about the \\getenv FOO FOO lines.It's a new function to everyone, and to anyone who hasn't seen the documentation it won't be immediately obvious which one is the ENV var and which one is the local var. Lowercasing the local var would be a way to reinforce which is which to the reader. It would also be consistent with var naming in the rest of the script. \n0004 finally removes the no-longer-needed infrastructure in+1Deleted code is debugged code.", "msg_date": "Sun, 19 Dec 2021 17:34:02 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Getting rid of regression test input/ and output/ files" }, { "msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n> I have a nitpick about the \\getenv FOO FOO lines.\n> It's a new function to everyone, and to anyone who hasn't seen the\n> documentation it won't be immediately obvious which one is the ENV var and\n> which one is the local var. Lowercasing the local var would be a way to\n> reinforce which is which to the reader. It would also be consistent with\n> var naming in the rest of the script.\n\nReasonable idea. Another thing I was wondering about was whether\nto attach PG_ prefixes to the environment variable names, since\nthose are in a more-or-less global namespace. 
If we do that,\nthen a different method for distinguishing the psql variables\nis to not prefix them.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 19 Dec 2021 17:48:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Getting rid of regression test input/ and output/ files" }, { "msg_contents": "On Sun, Dec 19, 2021 at 5:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Corey Huinker <corey.huinker@gmail.com> writes:\n> > I have a nitpick about the \\getenv FOO FOO lines.\n> > It's a new function to everyone, and to anyone who hasn't seen the\n> > documentation it won't be immediately obvious which one is the ENV var\n> and\n> > which one is the local var. Lowercasing the local var would be a way to\n> > reinforce which is which to the reader. It would also be consistent with\n> > var naming in the rest of the script.\n>\n> Reasonable idea. Another thing I was wondering about was whether\n> to attach PG_ prefixes to the environment variable names, since\n> those are in a more-or-less global namespace. If we do that,\n> then a different method for distinguishing the psql variables\n> is to not prefix them.\n\n\n+1 to that as well.\n\nWhich brings up a tangential question, is there value in having something\nthat brings in one or more env vars as psql vars directly. 
I'm thinking\nsomething like:\n\n\\importenv pattern [prefix]\n\n\n(alternate names: \\getenv_multi \\getenv_pattern, \\getenvs, etc)\n\nwhich could be used like\n\n\\importenv PG* env_\n\n\nwhich would import PGFOO and PGBAR as env_PGFOO and env_PGBAR, awkward\nnames but leaving no doubt about where a previously unreferenced variable\ncame from.\n\nI don't *think* we need it for this specific case, but since the subject of\nenv vars has come up I thought I'd throw it out there.\n\nOn Sun, Dec 19, 2021 at 5:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Corey Huinker <corey.huinker@gmail.com> writes:\n> I have a nitpick about the \\getenv FOO FOO lines.\n> It's a new function to everyone, and to anyone who hasn't seen the\n> documentation it won't be immediately obvious which one is the ENV var and\n> which one is the local var. Lowercasing the local var would be a way to\n> reinforce which is which to the reader. It would also be consistent with\n> var naming in the rest of the script.\n\nReasonable idea.  Another thing I was wondering about was whether\nto attach PG_ prefixes to the environment variable names, since\nthose are in a more-or-less global namespace.  If we do that,\nthen a different method for distinguishing the psql variables\nis to not prefix them.+1 to that as well.Which brings up a tangential question, is there value in having something that brings in one or more env vars as psql vars directly. 
I'm thinking something like:\\importenv pattern [prefix](alternate names: \\getenv_multi \\getenv_pattern, \\getenvs, etc)which could be used like\\importenv PG* env_which would import PGFOO and PGBAR as env_PGFOO and env_PGBAR, awkward names but leaving no doubt about where a previously unreferenced variable came from.I don't think we need it for this specific case, but since the subject of env vars has come up I thought I'd throw it out there.", "msg_date": "Sun, 19 Dec 2021 18:41:03 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Getting rid of regression test input/ and output/ files" }, { "msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n> Which brings up a tangential question, is there value in having something\n> that brings in one or more env vars as psql vars directly. I'm thinking\n> something like:\n\n> \\importenv pattern [prefix]\n\nMeh ... considering we've gone this long without any getenv capability\nin psql at all, that seems pretty premature.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 19 Dec 2021 19:00:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Getting rid of regression test input/ and output/ files" }, { "msg_contents": "On Sun, Dec 19, 2021 at 7:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Corey Huinker <corey.huinker@gmail.com> writes:\n> > Which brings up a tangential question, is there value in having something\n> > that brings in one or more env vars as psql vars directly. I'm thinking\n> > something like:\n>\n> > \\importenv pattern [prefix]\n>\n> Meh ... 
considering we've gone this long without any getenv capability\n> in psql at all, that seems pretty premature.\n>\n> regards, tom lane\n>\n\nFair enough.\n\nPatches didn't apply with `git apply` but did fine with `patch -p1`, from\nthere it passes make check-world.\n\nOn Sun, Dec 19, 2021 at 7:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Corey Huinker <corey.huinker@gmail.com> writes:\n> Which brings up a tangential question, is there value in having something\n> that brings in one or more env vars as psql vars directly. I'm thinking\n> something like:\n\n> \\importenv pattern [prefix]\n\nMeh ... considering we've gone this long without any getenv capability\nin psql at all, that seems pretty premature.\n\n                        regards, tom laneFair enough.Patches didn't apply with `git apply` but did fine with `patch -p1`, from there it passes make check-world.", "msg_date": "Sun, 19 Dec 2021 23:51:29 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Getting rid of regression test input/ and output/ files" }, { "msg_contents": "On 19.12.21 00:53, Tom Lane wrote:\n> 2. 
Export the values from pg_regress as environment variables,\n> and then add a way for the test scripts to read those variables.\n> I was a bit surprised to realize that we didn't have any way\n> to do that already --- psql has \\setenv, so why did we never\n> invent \\getenv?\n\nYou can do\n\n\\set foo `echo $ENVVAR`\n\nbut that's probably not portable enough for your purpose.\n\n\n", "msg_date": "Mon, 20 Dec 2021 15:05:15 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Getting rid of regression test input/ and output/ files" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 19.12.21 00:53, Tom Lane wrote:\n>> I was a bit surprised to realize that we didn't have any way\n>> to do that already --- psql has \\setenv, so why did we never\n>> invent \\getenv?\n\n> You can do\n> \\set foo `echo $ENVVAR`\n> but that's probably not portable enough for your purpose.\n\nI suppose that wouldn't work on Windows, so no.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 Dec 2021 10:09:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Getting rid of regression test input/ and output/ files" }, { "msg_contents": "\nOn 12/18/21 18:53, Tom Lane wrote:\n>\n> 2. 
Export the values from pg_regress as environment variables,\n> and then add a way for the test scripts to read those variables.\n> I was a bit surprised to realize that we didn't have any way\n> to do that already --- psql has \\setenv, so why did we never\n> invent \\getenv?\n\n\nI don't recall anyone expressing a need for it at the time we added\n\\setenv.\n\n+1 for adding it now.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 20 Dec 2021 11:52:48 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Getting rid of regression test input/ and output/ files" }, { "msg_contents": "On Sun, 19 Dec 2021 at 18:41, Corey Huinker <corey.huinker@gmail.com> wrote:\n>\n> Which brings up a tangential question, is there value in having something that brings in one or more env vars as psql vars directly. I'm thinking something like:\n>\n> \\importenv pattern [prefix]\n\nOof. That gives me the security heebie jeebies. Off the top of my head\nPHP, CGI, SSH have all dealt with vulnerabilities caused by\naccidentally importing variables they didn't intend to.\n\n-- \ngreg\n\n\n", "msg_date": "Mon, 20 Dec 2021 23:58:07 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Getting rid of regression test input/ and output/ files" } ]
[ { "msg_contents": "Hi,\n\nWhile working on [1], I noticed $SUBJECT: postgres_fdw resets the\nper-connection states of connections, which store async requests sent\nto remote servers in async_capable mode, during post-abort\n(pgfdw_xact_callback()), but it fails to do so during post-subabort\n(pgfdw_subxact_callback()). This causes a crash when re-executing a\nquery that was aborted in a subtransaction:\n\npostgres=# create table t (a text, b text);\npostgres=# create or replace function slow_data (name text, duration\nfloat) returns setof t as $$ begin perform pg_sleep(duration); return\nquery select name, generate_series(1, 1000)::text; end; $$ language\nplpgsql;\npostgres=# create view v1 as select * from slow_data('foo', 2.5);\npostgres=# create view v2 as select * from slow_data('bar', 5.0);\npostgres=# create extension postgres_fdw;\npostgres=# create server loopback1 foreign data wrapper postgres_fdw\noptions (dbname 'postgres', async_capable 'true');\npostgres=# create server loopback2 foreign data wrapper postgres_fdw\noptions (dbname 'postgres', async_capable 'true');\npostgres=# create user mapping for current_user server loopback1;\npostgres=# create user mapping for current_user server loopback2;\npostgres=# create foreign table list_p1 (a text, b text) server\nloopback1 options (table_name 'v1');\npostgres=# create foreign table list_p2 (a text, b text) server\nloopback2 options (table_name 'v2');\npostgres=# create table list_pt (a text, b text) partition by list (a);\npostgres=# alter table list_pt attach partition list_p1 for values in ('foo');\npostgres=# alter table list_pt attach partition list_p2 for values in ('bar');\n\npostgres=# begin;\nBEGIN\npostgres=*# savepoint s1;\nSAVEPOINT\npostgres=*# select count(*) from list_pt;\n^CCancel request sent\nERROR: canceling statement due to user request\npostgres=!# rollback to savepoint s1;\nROLLBACK\npostgres=*# select count(*) from list_pt;\nserver closed the connection unexpectedly\nThis probably 
means the server terminated abnormally\nbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\nWhen canceling the SELECT, an async request sent to a remote server\nusing a connection is canceled, but it’s stored in the per-connection\nstate of the connection even after the failed subtransaction for the\nreason above, so when re-executing the SELECT, postgres_fdw processes\nthe invalid async request to re-use the connection in GetConnection(),\ncausing a segmentation fault. This would be my oversight in commit\n27e1f1456. :-(\n\nTo fix, I modified pgfdw_abort_cleanup() to reset the per-connection\nstate in the post-subabort case as well. Also, I modified the\ninitialization so that it’s done only if necessary, to save cycles,\nand improved a comment on the initialization a bit. Attached is a\npatch for that.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Sun, 19 Dec 2021 19:25:48 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "postgres_fdw: incomplete subabort cleanup of connections used in\n async execution" }, { "msg_contents": "Etsuro Fujita писал 2021-12-19 13:25:\n> Hi,\n> \n> While working on [1], I noticed $SUBJECT: postgres_fdw resets the\n> per-connection states of connections, which store async requests sent\n> to remote servers in async_capable mode, during post-abort\n> (pgfdw_xact_callback()), but it fails to do so during post-subabort\n> (pgfdw_subxact_callback()). 
This causes a crash when re-executing a\n> query that was aborted in a subtransaction:\n> \n\nHi.\nLooks good to me.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Tue, 21 Dec 2021 19:08:26 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: incomplete subabort cleanup of connections used in\n async execution" }, { "msg_contents": "Hi Alexander,\n\nOn Wed, Dec 22, 2021 at 1:08 AM Alexander Pyhalov\n<a.pyhalov@postgrespro.ru> wrote:\n> Etsuro Fujita писал 2021-12-19 13:25:\n> > While working on [1], I noticed $SUBJECT: postgres_fdw resets the\n> > per-connection states of connections, which store async requests sent\n> > to remote servers in async_capable mode, during post-abort\n> > (pgfdw_xact_callback()), but it fails to do so during post-subabort\n> > (pgfdw_subxact_callback()). This causes a crash when re-executing a\n> > query that was aborted in a subtransaction:\n\n> Looks good to me.\n\nGreat! Thanks for reviewing!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 22 Dec 2021 14:40:45 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: incomplete subabort cleanup of connections used in\n async execution" }, { "msg_contents": "On Wed, Dec 22, 2021 at 2:40 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Wed, Dec 22, 2021 at 1:08 AM Alexander Pyhalov\n> <a.pyhalov@postgrespro.ru> wrote:\n> > Looks good to me.\n>\n> Great! Thanks for reviewing!\n\nI've committed the patch.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 21 Jan 2022 18:01:07 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: incomplete subabort cleanup of connections used in\n async execution" } ]
[ { "msg_contents": "I reduced the problematic query to this.\n\nSELECT 1 FROM pg_rewrite WHERE\npg_get_function_arg_default(ev_class, 1) !~~\npg_get_expr(ev_qual, ev_class, false);\n\n#0 pg_re_throw () at elog.c:1800\n#1 0x0000563f5d027932 in errfinish () at elog.c:593\n#2 0x0000563f5cb874ee in resolve_special_varno (node=0x563f5dd0f7e0, context=0x7ffcf0daf250, callback=0x563f5cfca270 <get_special_variable>, callback_arg=0x0) at ruleutils.c:7319\n#3 0x0000563f5cfca044 in get_variable () at ruleutils.c:7086\n#4 0x0000563f5cfc7c58 in get_rule_expr () at ruleutils.c:8363\n#5 0x0000563f5cfc97a6 in get_oper_expr (context=0x7ffcf0daf250, expr=0x563f5dd0f6f0) at ruleutils.c:9626\n#6 get_rule_expr () at ruleutils.c:8472\n#7 0x0000563f5cfcdc37 in deparse_expression_pretty (expr=expr@entry=0x563f5dd0f6f0, dpcontext=0x563f5dd10488, forceprefix=forceprefix@entry=false, showimplicit=showimplicit@entry=false, \n prettyFlags=prettyFlags@entry=2, startIndent=0) at ruleutils.c:3558\n#8 0x0000563f5cfce661 in pg_get_expr_worker (expr=<optimized out>, relid=12104, relname=0x563f5dd10130 \"pg_settings\", prettyFlags=2) at ruleutils.c:2645\n#9 0x0000563f5cd6540b in ExecInterpExpr () at execExprInterp.c:1272\n#10 0x0000563f5cd73c5f in ExecEvalExprSwitchContext (isNull=0x7ffcf0daf3a7, econtext=0x563f5dd08a00, state=0x563f5dd0a270) at ../../../src/include/executor/executor.h:339\n#11 ExecQual (econtext=0x563f5dd08a00, state=0x563f5dd0a270) at ../../../src/include/executor/executor.h:408\n#12 ExecScan (node=0x563f5dd09328, accessMtd=0x563f5cd9e790 <SeqNext>, recheckMtd=0x563f5cd9e780 <SeqRecheck>) at execScan.c:227\n#13 0x0000563f5cd69f73 in ExecProcNode (node=0x563f5dd09328) at ../../../src/include/executor/executor.h:257\n#14 ExecutePlan (execute_once=<optimized out>, dest=0x563f5dd18a80, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, \n use_parallel_mode=<optimized out>, planstate=0x563f5dd09328, estate=0x563f5dd08790) at 
execMain.c:1600\n#15 standard_ExecutorRun () at execMain.c:410\n#16 0x0000563f5cf0460f in PortalRunSelect () at pquery.c:924\n#17 0x0000563f5cf05bf1 in PortalRun () at pquery.c:768\n#18 0x0000563f5cf019b2 in exec_simple_query () at postgres.c:1215\n#19 0x0000563f5cf0370a in PostgresMain () at postgres.c:4498\n#20 0x0000563f5ce6e479 in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4594\n#21 BackendStartup (port=<optimized out>) at postmaster.c:4322\n#22 ServerLoop () at postmaster.c:1802\n#23 0x0000563f5ce6f47c in PostmasterMain () at postmaster.c:1474\n#24 0x0000563f5cb9a0c0 in main (argc=5, argv=0x563f5dc653f0) at main.c:198\n\nWhile reducing the query, I got a related error:\n\n\tSELECT 1 FROM pg_rewrite WHERE\n\tpg_get_function_arg_default(ev_class, 1) !~~\n\tpg_get_expr(ev_qual, 0, false);\n\nERROR: XX000: bogus varlevelsup: 0 offset 0\nLOCATION: get_variable, ruleutils.c:7003\n\nBoth errors are reproducible back to at least v10.\n\n\n", "msg_date": "Sun, 19 Dec 2021 14:54:22 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "sqlsmith: ERROR: XX000: bogus varno: 2" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> I reduced the problematic query to this.\n> SELECT 1 FROM pg_rewrite WHERE\n> pg_get_function_arg_default(ev_class, 1) !~~\n> pg_get_expr(ev_qual, ev_class, false);\n\nOr more simply,\n\nregression=# select pg_get_expr(ev_qual, ev_class, false) from pg_rewrite where rulename = 'pg_settings_u';\nERROR: bogus varno: 2\n\nI don't see anything particularly surprising here. pg_get_expr is only\nable to cope with expression trees over a single relation, but ON UPDATE\nrules can refer to both OLD and NEW relations. 
Maybe we could make the\nerror message more friendly, but there's not much else to be done,\nI think.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 19 Dec 2021 16:17:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sqlsmith: ERROR: XX000: bogus varno: 2" }, { "msg_contents": "On Sun, Dec 19, 2021 at 4:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > I reduced the problematic query to this.\n> > SELECT 1 FROM pg_rewrite WHERE\n> > pg_get_function_arg_default(ev_class, 1) !~~\n> > pg_get_expr(ev_qual, ev_class, false);\n>\n> Or more simply,\n>\n> regression=# select pg_get_expr(ev_qual, ev_class, false) from pg_rewrite where rulename = 'pg_settings_u';\n> ERROR: bogus varno: 2\n>\n> I don't see anything particularly surprising here. pg_get_expr is only\n> able to cope with expression trees over a single relation, but ON UPDATE\n> rules can refer to both OLD and NEW relations. Maybe we could make the\n> error message more friendly, but there's not much else to be done,\n> I think.\n\n+1 for making the error message more friendly.\n\n(We would certainly have a difficult time making it less friendly.)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Dec 2021 09:49:09 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith: ERROR: XX000: bogus varno: 2" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sun, Dec 19, 2021 at 4:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I don't see anything particularly surprising here. pg_get_expr is only\n>> able to cope with expression trees over a single relation, but ON UPDATE\n>> rules can refer to both OLD and NEW relations. 
Maybe we could make the\n>> error message more friendly, but there's not much else to be done,\n>> I think.\n\n> +1 for making the error message more friendly.\n\nThe problem is that the spot where it's thrown doesn't have a lot of\ncontext. We can fix that by having pg_get_expr itself check for\nout-of-spec Vars before starting the recursion, which adds a bit of\noverhead but I don't think we're terribly concerned about that.\n\nI figured this would be just a quick hack in ruleutils.c, but was\ndismayed to find the regression tests falling over, because some\nbozo neglected to teach nodeFuncs.c about partition expressions.\nIt might be a good idea to back-patch that part, before we find\nsome other place that fails.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 20 Dec 2021 11:25:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sqlsmith: ERROR: XX000: bogus varno: 2" }, { "msg_contents": "On Mon, Dec 20, 2021 at 11:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I figured this would be just a quick hack in ruleutils.c, but was\n> dismayed to find the regression tests falling over, because some\n> bozo neglected to teach nodeFuncs.c about partition expressions.\n> It might be a good idea to back-patch that part, before we find\n> some other place that fails.\n\nCalling people bozos isn't very nice. Please don't do that.\n\nThe commit that added PartitionBoundSpec and PartitionRangeDatum was\ncommitted by me and authored by Amit Langote. It is the original table\npartitioning commit -- f0e44751d7175fa3394da2c8f85e3ceb3cdbfe63. I'm\nreasonably sure that the reason why those didn't get added to\nexpression_tree_walker is that they don't seem like something that can\never appear in an expression. 
I still don't understand why that's not\ntrue.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Dec 2021 13:00:12 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith: ERROR: XX000: bogus varno: 2" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> The commit that added PartitionBoundSpec and PartitionRangeDatum was\n> committed by me and authored by Amit Langote. It is the original table\n> partitioning commit -- f0e44751d7175fa3394da2c8f85e3ceb3cdbfe63. I'm\n> reasonably sure that the reason why those didn't get added to\n> expression_tree_walker is that they don't seem like something that can\n> ever appear in an expression. I still don't understand why that's not\n> true.\n\nThe reason the regression tests fail if I only patch ruleutils is\nthat psql \\d on a partitioned table invokes\n\t... pg_get_expr(c.relpartbound, c.oid) FROM pg_catalog.pg_class c\nand evidently relpartbound does contain precisely these node types.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 Dec 2021 13:13:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sqlsmith: ERROR: XX000: bogus varno: 2" }, { "msg_contents": "On Mon, Dec 20, 2021 at 1:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The reason the regression tests fail if I only patch ruleutils is\n> that psql \\d on a partitioned table invokes\n> ... pg_get_expr(c.relpartbound, c.oid) FROM pg_catalog.pg_class c\n> and evidently relpartbound does contain precisely these node types.\n\nRight. I'm not surprised that relpartbound uses those node types. I\n*am* surprised that pg_get_expr() is expected to be able to handle\nthem. 
IOW, they ARE node trees, consonant with the fact that the\ncolumn type is pg_node_tree, but they're NOT expressions.\n\nIf we're going to have a policy that all node types stored in the\ncatalog should be supported by expression_tree_walker even if they're\nnot actually expressions, we ought to have a rather explicit comment\nabout that in the comments for expression_tree_walker, because\notherwise somebody might easily make this same mistake again.\nAlternatively, maybe pg_get_expr() should just fail and tell you that\nthis is not an expression, and if you want to see what's in that\ncolumn, you should use the SQL-callable functions specifically\nprovided for that purpose (pg_get_partkeydef, I think). I don't know\nwhy it should be legitimate for pg_get_expr() to just assume that any\nrandom node tree it gets handed must be an expression without doing\nany sanity checking.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Dec 2021 13:50:37 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith: ERROR: XX000: bogus varno: 2" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Right. I'm not surprised that relpartbound uses those node types. I\n> *am* surprised that pg_get_expr() is expected to be able to handle\n> them. 
IOW, they ARE node trees, consonant with the fact that the\n> column type is pg_node_tree, but they're NOT expressions.\n\nI'm not sure why you're astonished by that, considering that\npsql has applied pg_get_expr to relpartbound since f0e44751d,\nwhich was the same commit that put code into ruleutils.c to\nmake pg_get_expr work on relpartbounds.\n\nIt seems a bit late to change our minds on this; and anyway,\nif pg_get_expr didn't handle them, we'd just need to invent\nanother function that did.\n\n> Alternatively, maybe pg_get_expr() should just fail and tell you that\n> this is not an expression, and if you want to see what's in that\n> column, you should use the SQL-callable functions specifically\n> provided for that purpose (pg_get_partkeydef, I think).\n\npg_get_partkeydef does something different.\n\nregression=# select pg_get_expr(relpartbound,oid) from pg_class where relname = 'beta_neg';\n pg_get_expr \n----------------------------------\n FOR VALUES FROM ('-10') TO ('0')\n(1 row)\n\nregression=# select pg_get_partkeydef('beta_neg'::regclass);\n pg_get_partkeydef \n-------------------\n RANGE (b)\n(1 row)\n\n> I don't know\n> why it should be legitimate for pg_get_expr() to just assume that any\n> random node tree it gets handed must be an expression without doing\n> any sanity checking.\n\nIt does fall over if you try to apply it to stored rules:\n\nregression=# select pg_get_expr(ev_action, 0) from pg_rewrite;\nERROR: unrecognized node type: 232\n\nI'm not terribly excited about that, but maybe we should try to\nimprove it while we're here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 Dec 2021 14:36:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sqlsmith: ERROR: XX000: bogus varno: 2" }, { "msg_contents": "On Mon, Dec 20, 2021 at 2:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm not sure why you're astonished by that, considering that\n> psql has applied pg_get_expr to relpartbound since 
f0e44751d,\n> which was the same commit that put code into ruleutils.c to\n> make pg_get_expr work on relpartbounds.\n>\n> It seems a bit late to change our minds on this; and anyway,\n> if pg_get_expr didn't handle them, we'd just need to invent\n> another function that did.\n\nOK.\n\n> > Alternatively, maybe pg_get_expr() should just fail and tell you that\n> > this is not an expression, and if you want to see what's in that\n> > column, you should use the SQL-callable functions specifically\n> > provided for that purpose (pg_get_partkeydef, I think).\n>\n> pg_get_partkeydef does something different.\n>\n> regression=# select pg_get_expr(relpartbound,oid) from pg_class where relname = 'beta_neg';\n> pg_get_expr\n> ----------------------------------\n> FOR VALUES FROM ('-10') TO ('0')\n> (1 row)\n>\n> regression=# select pg_get_partkeydef('beta_neg'::regclass);\n> pg_get_partkeydef\n> -------------------\n> RANGE (b)\n> (1 row)\n\nOK ... but my point is that dump and restore does work. So whatever\ncases pg_get_expr() doesn't work must be cases that aren't needed for\nthat to happen. Otherwise this problem would have been found long ago.\n\n> > I don't know\n> > why it should be legitimate for pg_get_expr() to just assume that any\n> > random node tree it gets handed must be an expression without doing\n> > any sanity checking.\n>\n> It does fall over if you try to apply it to stored rules:\n>\n> regression=# select pg_get_expr(ev_action, 0) from pg_rewrite;\n> ERROR: unrecognized node type: 232\n>\n> I'm not terribly excited about that, but maybe we should try to\n> improve it while we're here.\n\nIn my view, the lack of excitement about sanity checks in functions\nthat deal with node trees in the catalogs is the root of this problem.\nI realize that's a deep hole out of which we're unlikely to be able to\nclimb in the short or even medium term, but we don't have to keep\ndigging. 
We either make a rule that pg_get_expr() can apply to\neverything stored in the catalogs and produce sensible answers, which\nseems to be what you prefer, or we make it return nice errors for the\ncases that it can't handle nicely, or some combination of the two. And\nwhatever we decide, we also document and enforce everywhere.\n\nI don't think it's any more correct for pg_get_expr() to elog(ERROR,\n\"some internal thing\") than it would be for to_timestamp() or\ndate_bin() or whatever to do something similar. And I think that\ncareful thinking about supported cases makes life easier for both\nusers (who know that if they see some junk error report, it's a\nmistake rather than intentional) and for developers (who then have a\nbetter chance of knowing what code they need to update to avoid\ngetting called bozos). Sloppy thinking about which cases are supported\nand unsupported leads to bugs, and some of those are likely to be\nsecurity bugs.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Dec 2021 15:17:01 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith: ERROR: XX000: bogus varno: 2" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> OK ... but my point is that dump and restore does work. So whatever\n> cases pg_get_expr() doesn't work must be cases that aren't needed for\n> that to happen. Otherwise this problem would have been found long ago.\n\npg_get_expr doesn't (or didn't) depend on expression_tree_walker,\nso there wasn't a problem there before. 
I am worried that there\nmight be other code paths, now or in future, that could try to apply\nexpression_tree_walker/mutator to relpartbound trees, which is\nwhy I think it's a reasonable idea to teach them about such trees.\n\n>> It does fall over if you try to apply it to stored rules:\n>> regression=# select pg_get_expr(ev_action, 0) from pg_rewrite;\n>> ERROR: unrecognized node type: 232\n>> I'm not terribly excited about that, but maybe we should try to\n>> improve it while we're here.\n\n> In my view, the lack of excitement about sanity checks in functions\n> that deal with node trees in the catalogs is the root of this problem.\n\nIt's only a problem if you hold the opinion that there should be\nno user-reachable ERRCODE_INTERNAL_ERROR errors. Which is a fine\nideal, but I fear we're a pretty long way off from that.\n\n> I realize that's a deep hole out of which we're unlikely to be able to\n> climb in the short or even medium term, but we don't have to keep\n> digging. We either make a rule that pg_get_expr() can apply to\n> everything stored in the catalogs and produce sensible answers, which\n> seems to be what you prefer, or we make it return nice errors for the\n> cases that it can't handle nicely, or some combination of the two. And\n> whatever we decide, we also document and enforce everywhere.\n\nI think having pg_get_expr throw an error for a query, as opposed to an\nexpression, is fine. What I don't want to do is subdivide things a lot\nmore finely than that; thus lumping \"relpartbound\" into \"expression\"\nseems like a reasonable thing to do. 
Especially since we already did it\nsix years ago.\n\nIn a quick check of catalogs with pg_node_tree columns, I find these\nother columns that pg_get_expr can fail on (at least with the\nexamples available in the regression DB):\n\nregression=# select count(pg_get_expr(prosqlbody,0)) from pg_proc;\nERROR: unrecognized node type: 232\nregression=# select count(pg_get_expr(tgqual,tgrelid)) from pg_trigger ;\nERROR: bogus varno: 2\n\nSo that looks like the same cases we already knew about: input is\na querytree not an expression, or it contains Vars for more than\none relation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 Dec 2021 16:20:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sqlsmith: ERROR: XX000: bogus varno: 2" }, { "msg_contents": "Here's a less hasty version of the patch.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 20 Dec 2021 18:17:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sqlsmith: ERROR: XX000: bogus varno: 2" }, { "msg_contents": "On Tue, Dec 21, 2021 at 6:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > OK ... but my point is that dump and restore does work. So whatever\n> > cases pg_get_expr() doesn't work must be cases that aren't needed for\n> > that to happen. Otherwise this problem would have been found long ago.\n>\n> pg_get_expr doesn't (or didn't) depend on expression_tree_walker,\n> so there wasn't a problem there before. I am worried that there\n> might be other code paths, now or in future, that could try to apply\n> expression_tree_walker/mutator to relpartbound trees, which is\n> why I think it's a reasonable idea to teach them about such trees.\n>\n> > I realize that's a deep hole out of which we're unlikely to be able to\n> > climb in the short or even medium term, but we don't have to keep\n> > digging. 
We either make a rule that pg_get_expr() can apply to\n> > everything stored in the catalogs and produce sensible answers, which\n> > seems to be what you prefer, or we make it return nice errors for the\n> > cases that it can't handle nicely, or some combination of the two. And\n> > whatever we decide, we also document and enforce everywhere.\n>\n> I think having pg_get_expr throw an error for a query, as opposed to an\n> expression, is fine. What I don't want to do is subdivide things a lot\n> more finely than that; thus lumping \"relpartbound\" into \"expression\"\n> seems like a reasonable thing to do. Especially since we already did it\n> six years ago.\n\nI admit that it was an oversight on my part that relpartbound trees\nare not recognized by nodeFuncs.c. :-(\n\nThanks for addressing that in the patch you posted. I guess fixing\nonly expression_tree_walker/mutator() suffices for now, but curious to\nknow if it was intentional that you decided not to touch the following\nsites:\n\nexprCollation(): it would perhaps make sense to return the collation\nassigned to the 1st element of listdatums/lowerdatums/upperdatums,\nespecially given that transformPartitionBoundValue() does assign a\ncollation to the values in those lists based on the parent's partition\nkey specification.\n\nexprType(): could be handled similarly\n\nqueryjumble.c: JumbleExpr(): whose header comment says:\n\n * expression_tree_walker() does, and therefore it's coded to be as parallel\n * to that function as possible.\n * ...\n * Note: the reason we don't simply use expression_tree_walker() is that the\n * point of that function is to support tree walkers that don't care about\n * most tree node types, but here we care about all types. 
We should complain\n * about any unrecognized node type.\n\nor maybe not, because relpartbound contents ought never reach queryjumble.c?\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Jan 2022 15:43:59 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith: ERROR: XX000: bogus varno: 2" }, { "msg_contents": "On Thu, Jan 6, 2022 at 3:43 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Dec 21, 2021 at 6:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > OK ... but my point is that dump and restore does work. So whatever\n> > > cases pg_get_expr() doesn't work must be cases that aren't needed for\n> > > that to happen. Otherwise this problem would have been found long ago.\n> >\n> > pg_get_expr doesn't (or didn't) depend on expression_tree_walker,\n> > so there wasn't a problem there before. I am worried that there\n> > might be other code paths, now or in future, that could try to apply\n> > expression_tree_walker/mutator to relpartbound trees, which is\n> > why I think it's a reasonable idea to teach them about such trees.\n> >\n> > > I realize that's a deep hole out of which we're unlikely to be able to\n> > > climb in the short or even medium term, but we don't have to keep\n> > > digging. We either make a rule that pg_get_expr() can apply to\n> > > everything stored in the catalogs and produce sensible answers, which\n> > > seems to be what you prefer, or we make it return nice errors for the\n> > > cases that it can't handle nicely, or some combination of the two. And\n> > > whatever we decide, we also document and enforce everywhere.\n> >\n> > I think having pg_get_expr throw an error for a query, as opposed to an\n> > expression, is fine. What I don't want to do is subdivide things a lot\n> > more finely than that; thus lumping \"relpartbound\" into \"expression\"\n> > seems like a reasonable thing to do. 
Especially since we already did it\n> > six years ago.\n>\n> I admit that it was an oversight on my part that relpartbound trees\n> are not recognized by nodeFuncs.c. :-(\n>\n> Thanks for addressing that in the patch you posted. I guess fixing\n> only expression_tree_walker/mutator() suffices for now...\n\nAlso, I wondered if it might be a good idea to expand the comment\nabove NodeTag definition in nodes.h to tell someone adding new types\nto also look in nodeFuncs.c to check if any of the functions there\nneed to be updated.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Jan 2022 21:44:25 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith: ERROR: XX000: bogus varno: 2" }, { "msg_contents": "On Mon, Dec 20, 2021 at 4:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> pg_get_expr doesn't (or didn't) depend on expression_tree_walker,\n> so there wasn't a problem there before. I am worried that there\n> might be other code paths, now or in future, that could try to apply\n> expression_tree_walker/mutator to relpartbound trees, which is\n> why I think it's a reasonable idea to teach them about such trees.\n\nI agree that doing so is totally reasonable. I merely don't think that\nprevious failure to do so makes anyone a \"bozo\". It was far from\nobvious that it was required.\n\n> It's only a problem if you hold the opinion that there should be\n> no user-reachable ERRCODE_INTERNAL_ERROR errors. 
Which is a fine\n> ideal, but I fear we're a pretty long way off from that.\n\nI do hold that opinion, and I think we ought to work in that direction\neven if we can't hope to get there quickly.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Jan 2022 09:37:54 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith: ERROR: XX000: bogus varno: 2" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> Thanks for addressing that in the patch you posted. I guess fixing\n> only expression_tree_walker/mutator() suffices for now, but curious to\n> know if it was intentional that you decided not to touch the following\n> sites:\n\n> exprCollation(): it would perhaps make sense to return the collation\n> assigned to the 1st element of listdatums/lowerdatums/upperdatums,\n> especially given that transformPartitionBoundValue() does assign a\n> collation to the values in those lists based on the parent's partition\n> key specification.\n\nBut each column could have a different collation, no? I do not\nthink it's sensible to pick one of those at random and claim\nthat's the collation of the whole thing. So throwing an error\nseems appropriate.\n\n> exprType(): could be handled similarly\n\nThe same, in spades. Anybody who is asking for \"the type\"\nof a relpartbound is misguided.\n\n> queryjumble.c: JumbleExpr(): whose header comment says:\n\nIf somebody needs that, I wouldn't object to adding support there.\nBut right now it would just be dead code, so why bother?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 06 Jan 2022 10:24:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sqlsmith: ERROR: XX000: bogus varno: 2" }, { "msg_contents": "On Fri, Jan 7, 2022 at 12:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > Thanks for addressing that in the patch you posted. 
I guess fixing\n> > only expression_tree_walker/mutator() suffices for now, but curious to\n> > know if it was intentional that you decided not to touch the following\n> > sites:\n>\n> > exprCollation(): it would perhaps make sense to return the collation\n> > assigned to the 1st element of listdatums/lowerdatums/upperdatums,\n> > especially given that transformPartitionBoundValue() does assign a\n> > collation to the values in those lists based on the parent's partition\n> > key specification.\n>\n> But each column could have a different collation, no? I do not\n> think it's sensible to pick one of those at random and claim\n> that's the collation of the whole thing. So throwing an error\n> seems appropriate.\n>\n> > exprType(): could be handled similarly\n>\n> The same, in spades. Anybody who is asking for \"the type\"\n> of a relpartbound is misguided.\n\nOkay, agree there's no need for handling bound nodes in these\nfunctions. Most sites that need to see the collation/type OID for\nbound datums work directly with the individual elements of those lists\nanyway.\n\n> > queryjumble.c: JumbleExpr(): whose header comment says:\n>\n> If somebody needs that, I wouldn't object to adding support there.\n> But right now it would just be dead code, so why bother?\n\nSure, makes sense.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Jan 2022 22:10:23 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith: ERROR: XX000: bogus varno: 2" } ]
[ { "msg_contents": "For some reason the current HEAD PublicationActions is a struct of\nboolean representing combinations of the 4 different \"publication\nactions\".\n\nI felt it is more natural to implement boolean flag combinations using\na bitmask instead of a struct of bools. 
IMO using the bitmask also\n> simplifies assignment and checking of said flags.\n\nI don't see why this is better. It just makes the code longer and adds \nmore punctuation and reduces type safety.\n\n\n", "msg_date": "Mon, 20 Dec 2021 14:58:06 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: PublicationActions - use bit flags." }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 20.12.21 01:18, Peter Smith wrote:\n>> I felt it is more natural to implement boolean flag combinations using\n>> a bitmask instead of a struct of bools. IMO using the bitmask also\n>> simplifies assignment and checking of said flags.\n\n> I don't see why this is better. It just makes the code longer and adds \n> more punctuation and reduces type safety.\n\nIt makes the code shorter in places where you need to process all the\nflags at once, but I agree it's not really an improvement elsewhere.\nNot sure if it's worth changing.\n\nOne thing I noted is that the duplicate PublicationActions typedefs\nwill certainly draw warnings, if not hard errors, from some compilers.\nYou could get around that by removing the typedefs altogether and just\nusing \"int\", which'd be more consistent with our usual practices anyway.\nBut it does play into Peter's objection about type safety.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 Dec 2021 11:56:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PublicationActions - use bit flags." }, { "msg_contents": "On 2021-Dec-20, Peter Eisentraut wrote:\n\n> I don't see why this is better. 
It just makes the code longer and adds more\n> punctuation and reduces type safety.\n\nRemoving one palloc is I think the most important consequence ...\nprobably not a big deal though.\n\nI think we could change the memcpy calls to struct assignment, as that\nwould look a bit cleaner, and call it a day.\n\nOne thing I would not like would be to change the catalog representation\nfrom bools into an integer. We do that for pg_trigger.tgflags (IIRC)\nand it is horrible.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 20 Dec 2021 14:14:01 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: PublicationActions - use bit flags." }, { "msg_contents": "On Tue, Dec 21, 2021 at 4:14 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Dec-20, Peter Eisentraut wrote:\n>\n> > I don't see why this is better. It just makes the code longer and adds more\n> > punctuation and reduces type safety.\n>\n> Removing one palloc is I think the most important consequence ...\n> probably not a big deal though.\n>\n> I think we could change the memcpy calls to struct assignment, as that\n> would look a bit cleaner, and call it a day.\n>\n\nI think we can all agree that returning PublicationActions as a\npalloc'd struct is unnecessary.\nI've attached a patch which addresses that and replaces a couple of\nmemcpy()s with struct assignment, as suggested.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Tue, 21 Dec 2021 11:19:16 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PublicationActions - use bit flags." 
}, { "msg_contents": "Greg Nancarrow <gregn4422@gmail.com> writes:\n> I've attached a patch which addresses that and replaces a couple of\n> memcpy()s with struct assignment, as suggested.\n\nRemoving this is not good:\n\n \tif (relation->rd_pubactions)\n-\t{\n \t\tpfree(relation->rd_pubactions);\n-\t\trelation->rd_pubactions = NULL;\n-\t}\n \nIf the subsequent palloc fails, you've created a problem where\nthere was none before.\n\nI do wonder why we have to palloc a constant-size substructure in\nthe first place, especially one that is likely smaller than the\npointer that points to it. Maybe the struct definition should be\nmoved so that we can just declare it in-line in the relcache entry?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 Dec 2021 19:56:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PublicationActions - use bit flags." }, { "msg_contents": "On Tue, Dec 21, 2021 at 11:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Removing this is not good:\n>\n> if (relation->rd_pubactions)\n> - {\n> pfree(relation->rd_pubactions);\n> - relation->rd_pubactions = NULL;\n> - }\n>\n> If the subsequent palloc fails, you've created a problem where\n> there was none before.\n>\n\nOops, yeah, I got carried away; if palloc() failed and called exit(),\nthen it would end up crashing when trying to use/pfree rd_pubactions\nagain.\nBetter leave that line in ...\n\n> I do wonder why we have to palloc a constant-size substructure in\n> the first place, especially one that is likely smaller than the\n> pointer that points to it. 
Maybe the struct definition should be\n> moved so that we can just declare it in-line in the relcache entry?\n>\n\nI think currently it's effectively using the rd_pubactions pointer as\na boolean flag to indicate whether the publication membership info has\nbeen fetched (so the bool flags are valid).\nI guess you'd need another bool flag if you wanted to make that struct\nin-line in the relcache entry.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Tue, 21 Dec 2021 12:55:52 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PublicationActions - use bit flags." }, { "msg_contents": "On Tue, Dec 21, 2021 at 11:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Greg Nancarrow <gregn4422@gmail.com> writes:\n> > I've attached a patch which addresses that and replaces a couple of\n> > memcpy()s with struct assignment, as suggested.\n>\n> Removing this is not good:\n>\n> if (relation->rd_pubactions)\n> - {\n> pfree(relation->rd_pubactions);\n> - relation->rd_pubactions = NULL;\n> - }\n>\n> If the subsequent palloc fails, you've created a problem where\n> there was none before.\n>\n> I do wonder why we have to palloc a constant-size substructure in\n> the first place, especially one that is likely smaller than the\n> pointer that points to it. 
Maybe the struct definition should be\n> moved so that we can just declare it in-line in the relcache entry?\n>\n\nAt the risk of flogging a dead horse, here is v2 of my original\nbit-flag replacement for the PublicationActions struct.\n\nThis version introduces one more bit flag for the relcache status, and\nby doing so means all that code for Relation cache PublicationActions\npointers and pallocs and context switches can just disappear...\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 29 Dec 2021 15:21:36 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PublicationActions - use bit flags." }, { "msg_contents": "On Mon, Dec 20, 2021 at 11:18:41AM +1100, Peter Smith wrote:\n> For some reason the current HEAD PublicationActions is a struct of\n> boolean representing combinations of the 4 different \"publication\n> actions\".\n> \n> I felt it is more natural to implement boolean flag combinations using\n> a bitmask instead of a struct of bools. IMO using the bitmask also\n> simplifies assignment and checking of said flags.\n\nPeter E made a suggestion to use a similar struct with bools last year for\nREINDEX.\nhttps://www.postgresql.org/message-id/7ec67c56-2377-cd05-51a0-691104404abe@enterprisedb.com\n\nAlvaro pointed out that the integer flags are better for ABI compatibility - it\nwould allow adding a new flag in backbranches, if that were needed for a\nbugfix.\n\nBut that's not very compelling, since the bools have existed in v10...\nAlso, the booleans directly correspond with the catalog.\n\n+ if (pubform->pubinsert) pub->pubactions |= PUBACTION_INSERT;\n\nThis is usually written like:\n\n\tpub->pubactions |= (pubform->pubinsert ? PUBACTION_INSERT : 0)\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 29 Dec 2021 10:30:15 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PublicationActions - use bit flags." 
}, { "msg_contents": "On Thu, Dec 30, 2021 at 3:30 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Dec 20, 2021 at 11:18:41AM +1100, Peter Smith wrote:\n> > For some reason the current HEAD PublicationActions is a struct of\n> > boolean representing combinations of the 4 different \"publication\n> > actions\".\n> >\n> > I felt it is more natural to implement boolean flag combinations using\n> > a bitmask instead of a struct of bools. IMO using the bitmask also\n> > simplifies assignment and checking of said flags.\n>\n> Peter E made a suggestion to use a similar struct with bools last year for\n> REINDEX.\n> https://www.postgresql.org/message-id/7ec67c56-2377-cd05-51a0-691104404abe@enterprisedb.com\n>\n> Alvaro pointed out that the integer flags are better for ABI compatibility - it\n> would allow adding a new flag in backbranches, if that were needed for a\n> bugfix.\n>\n> But that's not very compelling, since the bools have existed in v10...\n> Also, the booleans directly correspond with the catalog.\n>\n> + if (pubform->pubinsert) pub->pubactions |= PUBACTION_INSERT;\n>\n> This is usually written like:\n>\n> pub->pubactions |= (pubform->pubinsert ? PUBACTION_INSERT : 0)\n>\n\nThanks for the info, I've modified those assignment styles as suggested.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.", "msg_date": "Thu, 30 Dec 2021 11:31:53 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PublicationActions - use bit flags." }, { "msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> On Thu, Dec 30, 2021 at 3:30 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> + if (pubform->pubinsert) pub->pubactions |= PUBACTION_INSERT;\n>> This is usually written like:\n>> pub->pubactions |= (pubform->pubinsert ? 
PUBACTION_INSERT : 0)\n\n> Thanks for the info, I've modified those assignment styles as suggested.\n\nFWIW, I think it's utter nonsense to claim that the second way is\npreferred over the first. There may be some people who think\nthe second way is more legible, but I don't; and I'm pretty sure\nthat the first way is significantly more common in the PG codebase.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 Dec 2021 19:36:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PublicationActions - use bit flags." }, { "msg_contents": "On Tue, Dec 21, 2021 at 12:55 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Tue, Dec 21, 2021 at 11:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Removing this is not good:\n> >\n> > if (relation->rd_pubactions)\n> > - {\n> > pfree(relation->rd_pubactions);\n> > - relation->rd_pubactions = NULL;\n> > - }\n> >\n> > If the subsequent palloc fails, you've created a problem where\n> > there was none before.\n> >\n>\n> Oops, yeah, I got carried away; if palloc() failed and called exit(),\n> then it would end up crashing when trying to use/pfree rd_pubactions\n> again.\n> Better leave that line in ...\n>\n\nAttaching an updated patch to fix that oversight.\nThis patch thus fixes the original palloc issue in a minimal way,\nkeeping the same relcache structure.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Fri, 21 Jan 2022 11:05:55 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PublicationActions - use bit flags." 
}, { "msg_contents": "\nOn 21.01.22 01:05, Greg Nancarrow wrote:\n> On Tue, Dec 21, 2021 at 12:55 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>>\n>> On Tue, Dec 21, 2021 at 11:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>\n>>> Removing this is not good:\n>>>\n>>> if (relation->rd_pubactions)\n>>> - {\n>>> pfree(relation->rd_pubactions);\n>>> - relation->rd_pubactions = NULL;\n>>> - }\n>>>\n>>> If the subsequent palloc fails, you've created a problem where\n>>> there was none before.\n>>>\n>>\n>> Oops, yeah, I got carried away; if palloc() failed and called exit(),\n>> then it would end up crashing when trying to use/pfree rd_pubactions\n>> again.\n>> Better leave that line in ...\n>>\n> \n> Attaching an updated patch to fix that oversight.\n> This patch thus fixes the original palloc issue in a minimal way,\n> keeping the same relcache structure.\n\nWhy can't GetRelationPublicationActions() have the PublicationActions as \na return value, instead of changing it to an output argument?\n\n\n", "msg_date": "Mon, 24 Jan 2022 21:31:17 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: PublicationActions - use bit flags." 
}, { "msg_contents": "On Tue, Jan 25, 2022 at 7:31 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n>\n> Why can't GetRelationPublicationActions() have the PublicationActions as\n> a return value, instead of changing it to an output argument?\n\nThat would be OK too, for now, for the current (small size, typically\n4-byte) PublicationActions struct.\nBut if that function was extended in the future to return more publication\ninformation than just the PublicationActions struct (and I'm seeing that in\nthe filtering patches [1]), then using return-by-value won't be as\nefficient as pass-by-reference, and I'd tend to stick with\npass-by-reference in that case.\n\n[1]\nhttps://postgr.es/m/OS0PR01MB5716B899A66D2997EF28A1B3945F9%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Tue, 25 Jan 2022 17:14:58 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PublicationActions - use bit flags." 
}, { "msg_contents": "On 25.01.22 07:14, Greg Nancarrow wrote:\n> \n> On Tue, Jan 25, 2022 at 7:31 AM Peter Eisentraut \n> <peter.eisentraut@enterprisedb.com \n> <mailto:peter.eisentraut@enterprisedb.com>> wrote:\n> >\n> > Why can't GetRelationPublicationActions() have the PublicationActions as\n> > a return value, instead of changing it to an output argument?\n> \n> That would be OK too, for now, for the current (small size, typically \n> 4-byte) PublicationActions struct.\n> But if that function was extended in the future to return more \n> publication information than just the PublicationActions struct (and I'm \n> seeing that in the filtering patches [1]), then using return-by-value \n> won't be as efficient as pass-by-reference, and I'd tend to stick with \n> pass-by-reference in that case.\n> \n> [1] \n> https://postgr.es/m/OS0PR01MB5716B899A66D2997EF28A1B3945F9%40OS0PR01MB5716.jpnprd01.prod.outlook.com \n> <https://postgr.es/m/OS0PR01MB5716B899A66D2997EF28A1B3945F9%40OS0PR01MB5716.jpnprd01.prod.outlook.com>\n\nBy itself, this refactoring doesn't seem worth it. The code is actually \nlonger at the end, and we haven't made it any more extensible or \nanything. And AFAICT, this is not called in a performance-sensitive way.\n\nThe proposed changes in [1] change this function more significantly, so \nadopting the present change wouldn't really help there either except \ncreate the need for one more rebase.\n\nSo I think we should leave this alone here and let [1] make the changes \nit needs.\n\n\n", "msg_date": "Tue, 25 Jan 2022 11:54:29 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: PublicationActions - use bit flags." } ]
[ { "msg_contents": "Hi,\n\nWhen reviewing some replica identity related patches, I found that when adding\nprimary key using an existing unique index on not null columns, the\ntarget table's relcache won't be invalidated.\n\nThis would cause an error When REPLICA IDENTITY is default and we are\nUPDATE/DELETE a published table , because we will refer to\nRelationData::rd_pkindex to check if the UPDATE or DELETE can be safety\nexecuted in this case.\n\n---reproduction steps\nCREATE TABLE test(a int not null, b int);\nCREATE UNIQUE INDEX a ON test(a);\nCREATE PUBLICATION PUB for TABLE test;\nUPDATE test SET a = 2;\n\tERROR: cannot update table \"test\" because it does not have a replica identity and publishes updates\n\tHINT: To enable updating the table, set REPLICA IDENTITY using ALTER TABLE.\n\nalter table test add primary key using index a;\nUPDATE test SET a = 2;\n\tERROR: cannot update table \"test\" because it does not have a replica identity and publishes updates\n\tHINT: To enable updating the table, set REPLICA IDENTITY using ALTER TABLE.\n---\n\nI think the bug exists in HEAD ~ PG11 after the commit(f66e8bf) which remove\nrelhaspkey from pg_class. 
In PG10, when adding a primary key, it will always\nupdate the relhaspkey in pg_class which will invalidate the relcache, so it\nwas OK.\n\nI tried to write a patch to fix this by invalidating the relcache after marking\nprimary key in index_constraint_create().\n\nBest regards,\nHou zj", "msg_date": "Mon, 20 Dec 2021 06:45:33 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "relcache not invalidated when ADD PRIMARY KEY USING INDEX" }, { "msg_contents": "On Mon, Dec 20, 2021, at 3:45 AM, houzj.fnst@fujitsu.com wrote:\n> Hi,\n> \n> When reviewing some replica identity related patches, I found that when adding\n> primary key using an existing unique index on not null columns, the\n> target table's relcache won't be invalidated.\nGood catch.\n\nIt seems you can simplify your checking indexForm->indisprimary directly, no?\n\nWhy did you add new tests for test_decoding? I think the TAP tests alone are\nfine. BTW, this test is similar to publisher3/subscriber3. Isn't it better to\nuse the same pub/sub to reduce the test execution time?\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Mon, 20 Dec 2021 19:28:30 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: relcache not invalidated when ADD PRIMARY KEY USING INDEX" }, { "msg_contents": "Hi,\n\nOn Mon, Dec 20, 2021 at 06:45:33AM +0000, houzj.fnst@fujitsu.com wrote:\n> \n> I tried to write a patch to fix this by invalidating the relcache after marking\n> primary key in index_constraint_create().\n\nThe cfbot reports that you have a typo in your regression tests:\n\nhttps://api.cirrus-ci.com/v1/artifact/task/4806573636714496/regress_diffs/contrib/test_decoding/regression.diffs\ndiff -U3 /tmp/cirrus-ci-build/contrib/test_decoding/expected/ddl.out /tmp/cirrus-ci-build/contrib/test_decoding/results/ddl.out\n--- /tmp/cirrus-ci-build/contrib/test_decoding/expected/ddl.out\t2022-01-11 16:46:51.684727957 +0000\n+++ /tmp/cirrus-ci-build/contrib/test_decoding/results/ddl.out\t2022-01-11 16:50:35.584089784 +0000\n@@ -594,7 +594,7 @@\n DELETE FROM table_dropped_index_no_pk WHERE b = 1;\n DELETE FROM table_dropped_index_no_pk WHERE a = 3;\n DROP TABLE table_dropped_index_no_pk;\n--- check table with newly added primary key\n+-- check tables with newly added primary key\n\nCould you send an updated patch? As far as I can see the rest of the\nregression tests succeed so the patch can probably still be reviewed. 
That\nbeing said, there are pending comments by Euler so I think it's appropriate to\nzsh:1: command not found: q\n\n\n", "msg_date": "Fri, 14 Jan 2022 14:34:49 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: relcache not invalidated when ADD PRIMARY KEY USING INDEX" }, { "msg_contents": "\"Euler Taveira\" <euler@eulerto.com> writes:\n> On Mon, Dec 20, 2021, at 3:45 AM, houzj.fnst@fujitsu.com wrote:\n>> When reviewing some replica identity related patches, I found that when adding\n>> primary key using an existing unique index on not null columns, the\n>> target table's relcache won't be invalidated.\n\n> Good catch.\n\nIndeed.\n\n> It seems you can simplify your checking indexForm->indisprimary directly, no?\n\nThe logic seems fine to me --- it avoids an unnecessary cache flush\nif the index was already the pkey. (Whether we actually reach this\ncode in such a case, I dunno, but it's not costing much to be smart\nif we are.)\n\n> Why did you add new tests for test_decoding? I think the TAP tests alone are\n> fine. BTW, this test is similar to publisher3/subscriber3. Isn't it better to\n> use the same pub/sub to reduce the test execution time?\n\nI agree, the proposed test cases are expensive overkill. The repro\nshown in the original message is sufficient and much cheaper.\nMoreover, we already have some test cases very much like that in\nregress/sql/publication.sql, so that's where I put it.\n\nPushed with some minor cosmetic adjustments.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 22 Jan 2022 13:38:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: relcache not invalidated when ADD PRIMARY KEY USING INDEX" } ]
[ { "msg_contents": "When reverse-compiling a query, ruleutils.c has some complicated code to \nhandle the join output columns of a JOIN USING join. There used to be \nno way to qualify those columns, and so if there was a naming conflict \nanywhere in the query, those output columns had to be renamed to be \nunique throughout the query.\n\nSince PostgreSQL 14, we have a new feature that allows adding an alias \nto a JOIN USING clause. This provides a better solution to this \nproblem. This patch changes the logic in ruleutils.c so that if naming \nconflicts with JOIN USING output columns are found in the query, these \nJOIN USING aliases with generated names are attached everywhere and the \ncolumns are then qualified everywhere.\n\nI made it so that new JOIN USING aliases are only created if needed in \nthe query, since we already have the logic of has_dangerous_join_using() \nto compute when that is needed. We could probably do away with that too \nand always use them, but I think that would be surprising and not what \npeople want.\n\nThe regression test changes illustrate the effects very well.\n\nThis is PoC-level right now. You will find blatant code duplication in \nset_rtable_names(), and for now I have only commented out some code that \ncould be removed, not actually removed it.", "msg_date": "Mon, 20 Dec 2021 12:30:02 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Use JOIN USING aliases in ruleutils.c" } ]
[ { "msg_contents": "Hello, hackers.\nI have the same question. \nHope to reply, thanks.\n\n-----Original Message-----\nFrom: liuhuailing@fujitsu.com <liuhuailing@fujitsu.com> \nSent: Tuesday, December 14, 2021 4:55 PM\nTo: pgsql-hackers@postgresql.org\nSubject: Question about HEAP_XMIN_COMMITTED\n\nHi\n\nI did the following steps on PG.\n\n1. Building a synchronous streaming replication environment.\n2. Executing the following SQL statements on primary\n (1) postgres=# CREATE EXTENSION pageinspect;\n (2) postgres=# begin;\n (3) postgres=# select txid_current();\n (4) postgres=# create table mytest6(i int);\n (6) postgres=# insert into mytest6 values(1);\n (7) postgres=# commit;\n3. Executing the following SQL statements on standby\n (8) postgres=# select * from mytest6;\n i \n ---\n 1\n (1 row)\n (9) postgres=# SELECT t_infomask FROM heap_page_items(get_raw_page('pg_class', 0)) where t_xmin=502※;\n   t_infomask \n ------------\n 2049\n (1 row)\n ※502 is the transaction ID returned by step (3) above.\n\nIn the result of step (9),the value of the t_infomask field is 2049(0x801) which means that HEAP_XMAX_INVALID and HEAP_HASNULL flags were setted, but HEAP_XMIN_COMMITTED flag was not setted.\n\nAccording to source , when step (8) was executed,SetHintBits function were called to set HEAP_XMIN_COMMITTED.\nhowever, the minRecoveryPoint value was not updated. So HEAP_XMIN_COMMITTED flag was not setted successfully.\n\nAfter CheckPoint, select from mytest6 again in another session, we can see HEAP_XMIN_COMMITTED flag was setted.\n\nSo my question is that before checkpoint, HEAP_XMIN_COMMITTED flag was not setted correctly, right?\n\nOr we need to move minRecoveryPoint forword to make HEAP_XMIN_COMMITTED flag setted correctly when first select from mytest6.\n\n\nBest Regards, LiuHuailing\n--\n以上\nLiu Huailing\n--------------------------------------------------\nLiu Huailing\nDevelopment Department III\nSoftware Division II\nNanjing Fujitsu Nanda Software Tech. 
Co., Ltd.(FNST)\nADDR.: No.6 Wenzhu Road, Software Avenue,\n Nanjing, 210012, China\nTEL : +86+25-86630566-8439\nCOINS: 7998-8439\nFAX : +86+25-83317685\nMAIL : liuhuailing@cn.fujitsu.com\n--------------------------------------------------", "msg_date": "Tue, 21 Dec 2021 12:01:52 +0800 (GMT+08:00)", "msg_from": "haiming <sun373738013@163.com>", "msg_from_op": true, "msg_subject": "FW: Question about HEAP_XMIN_COMMITTED" } ]
[ { "msg_contents": "While testing the below case with the hot standby setup (with the\nlatest code), I have noticed that the checkpointer process crashed\nwith the $subject error. As per my observation, we have registered the\nSYNC_REQUEST when inserting some tuple into the table, and later on\nALTER SET TABLESPACE we have registered the SYNC_UNLINK_REQUEST, which\nlooks fine so far, then I have noticed that only when the standby is\nconnected the underlying table file w.r.t the old tablespace is\nalready deleted. Now, in AbsorbFsyncRequests we don't do anything for\nthe SYNC_REQUEST even though we have SYNC_UNLINK_REQUEST for the same\nfile, but since the underlying file is already deleted the\ncheckpointer cashed while processing the SYNC_REQUEST.\n\nI have spent some time on this but could not figure out how the\nrelfilenodenode file w.r.t. to the old tablespace is getting deleted\nand if I disconnect the standby then it is not getting deleted, not\nsure how walsender is playing a role in deleting the file even before\ncheckpointer process the unlink request.\n\npostgres[8905]=# create tablespace tab location\n'/home/dilipkumar/work/PG/install/bin/test';\nCREATE TABLESPACE\npostgres[8905]=# create tablespace tab1 location\n'/home/dilipkumar/work/PG/install/bin/test1';\nCREATE TABLESPACE\npostgres[8905]=# create database test tablespace tab;\nCREATE DATABASE\npostgres[8905]=# \\c test\nYou are now connected to database \"test\" as user \"dilipkumar\".\ntest[8912]=# create table t( a int PRIMARY KEY,b text);\nCREATE TABLE\ntest[8912]=# insert into t values (generate_series(1,10), 'aaa');\nINSERT 0 10\ntest[8912]=# alter table t set tablespace tab1 ;\nALTER TABLE\ntest[8912]=# CHECKPOINT ;\nWARNING: 57P02: terminating connection because of crash of another\nserver process\n\nlog shows:\nPANIC: could not fsync file\n\"pg_tblspc/16384/PG_15_202112131/16386/16387\": No such file or\ndirectory\n\nbacktrace:\n#0 0x00007f2f865ff387 in raise () from /lib64/libc.so.6\n#1 
0x00007f2f86600a78 in abort () from /lib64/libc.so.6\n#2 0x0000000000b13da3 in errfinish (filename=0xcf283f \"sync.c\", ..\n#3 0x0000000000978dc7 in ProcessSyncRequests () at sync.c:439\n#4 0x00000000005949d2 in CheckPointGuts (checkPointRedo=67653624,\nflags=108) at xlog.c:9590\n#5 0x00000000005942fe in CreateCheckPoint (flags=108) at xlog.c:9318\n#6 0x00000000008a80b7 in CheckpointerMain () at checkpointer.c:444\n\nNote: This smaller test case is derived from one of the bigger\nscenarios raised by Neha Sharma [1]\n\n[1]https://www.postgresql.org/message-id/CANiYTQs0E8TcB11eU0C4eNN0tUd%3DSQqsqEtL1AVZP1%3DEnD-49A%40mail.gmail.com\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Dec 2021 16:47:23 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Checkpointer crashes with \"PANIC: could not fsync file \"pg_tblspc/..\"" }, { "msg_contents": "This is happening because in the case of the primary server, we let the\ncheckpointer process to unlink the first segment of the rel file but\nthat's not the case with the standby server. In case of standby, the\nstartup/recovery process unlinks the first segment of the rel file\nimmediately during WAL replay. Now, in this case as the tablespace path is\nshared between the primary and standby servers, when the startup process\nunlinks the first segment of the rel file, it gets unlinked/deleted for the\nprimary server as well. 
So, when we run the checkpoint on the primary\nserver the subsequent fsync fails with the error \"No such file..\" which\ncauses the database system to PANIC.\n\nPlease have a look at the code below in mdunlinkfork().\n\n if (isRedo || forkNum != MAIN_FORKNUM ||\nRelFileNodeBackendIsTemp(rnode))\n {\n if (!RelFileNodeBackendIsTemp(rnode))\n {\n /* Prevent other backends' fds from holding on to the disk\nspace */\n ret = do_truncate(path);\n\n /* Forget any pending sync requests for the first segment */\n register_forget_request(rnode, forkNum, 0 /* first seg */ );\n }\n else\n ret = 0;\n\n /* Next unlink the file, unless it was already found to be missing\n*/\n if (ret == 0 || errno != ENOENT)\n {\n ret = unlink(path);\n if (ret < 0 && errno != ENOENT)\n ereport(WARNING,\n (errcode_for_file_access(),\n errmsg(\"could not remove file \\\"%s\\\": %m\", path)));\n }\n }\n else\n {\n /* Prevent other backends' fds from holding on to the disk space */\n ret = do_truncate(path);\n\n /* Register request to unlink first segment later */\n register_unlink_segment(rnode, forkNum, 0 /* first seg */ );\n }\n\n==\n\nIs it okay to share the same tablespace (infact relfile) between the\nprimary and standby server? Perhaps NO.\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Tue, Dec 21, 2021 at 4:47 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> While testing the below case with the hot standby setup (with the\n> latest code), I have noticed that the checkpointer process crashed\n> with the $subject error. As per my observation, we have registered the\n> SYNC_REQUEST when inserting some tuple into the table, and later on\n> ALTER SET TABLESPACE we have registered the SYNC_UNLINK_REQUEST, which\n> looks fine so far, then I have noticed that only when the standby is\n> connected the underlying table file w.r.t the old tablespace is\n> already deleted. 
Now, in AbsorbFsyncRequests we don't do anything for\n> the SYNC_REQUEST even though we have SYNC_UNLINK_REQUEST for the same\n> file, but since the underlying file is already deleted the\n> checkpointer cashed while processing the SYNC_REQUEST.\n>\n> I have spent some time on this but could not figure out how the\n> relfilenodenode file w.r.t. to the old tablespace is getting deleted\n> and if I disconnect the standby then it is not getting deleted, not\n> sure how walsender is playing a role in deleting the file even before\n> checkpointer process the unlink request.\n>\n> postgres[8905]=# create tablespace tab location\n> '/home/dilipkumar/work/PG/install/bin/test';\n> CREATE TABLESPACE\n> postgres[8905]=# create tablespace tab1 location\n> '/home/dilipkumar/work/PG/install/bin/test1';\n> CREATE TABLESPACE\n> postgres[8905]=# create database test tablespace tab;\n> CREATE DATABASE\n> postgres[8905]=# \\c test\n> You are now connected to database \"test\" as user \"dilipkumar\".\n> test[8912]=# create table t( a int PRIMARY KEY,b text);\n> CREATE TABLE\n> test[8912]=# insert into t values (generate_series(1,10), 'aaa');\n> INSERT 0 10\n> test[8912]=# alter table t set tablespace tab1 ;\n> ALTER TABLE\n> test[8912]=# CHECKPOINT ;\n> WARNING: 57P02: terminating connection because of crash of another\n> server process\n>\n> log shows:\n> PANIC: could not fsync file\n> \"pg_tblspc/16384/PG_15_202112131/16386/16387\": No such file or\n> directory\n>\n> backtrace:\n> #0 0x00007f2f865ff387 in raise () from /lib64/libc.so.6\n> #1 0x00007f2f86600a78 in abort () from /lib64/libc.so.6\n> #2 0x0000000000b13da3 in errfinish (filename=0xcf283f \"sync.c\", ..\n> #3 0x0000000000978dc7 in ProcessSyncRequests () at sync.c:439\n> #4 0x00000000005949d2 in CheckPointGuts (checkPointRedo=67653624,\n> flags=108) at xlog.c:9590\n> #5 0x00000000005942fe in CreateCheckPoint (flags=108) at xlog.c:9318\n> #6 0x00000000008a80b7 in CheckpointerMain () at checkpointer.c:444\n>\n> Note: This 
smaller test case is derived from one of the bigger\n> scenarios raised by Neha Sharma [1]\n>\n> [1]\n> https://www.postgresql.org/message-id/CANiYTQs0E8TcB11eU0C4eNN0tUd%3DSQqsqEtL1AVZP1%3DEnD-49A%40mail.gmail.com\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>\n>\n>", "msg_date": "Wed, 22 Dec 2021 00:28:46 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Checkpointer crashes with \"PANIC: could not fsync file\n \"pg_tblspc/..\"" }, { "msg_contents": "On Wed, 22 Dec 2021 at 12:28 AM, Ashutosh Sharma <ashu.coek88@gmail.com>\nwrote:\n\n>\n> Is it okay to share the same tablespace (infact relfile) between the\n> primary and standby server? Perhaps NO.\n>\n\n> Oops, yeah absolutely they can never share the tablespace path. So we can\nignore this issue.\n\n—\nDilip\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 22 Dec 2021 07:20:22 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Checkpointer crashes with \"PANIC: could not fsync file\n \"pg_tblspc/..\"" }, { "msg_contents": "On Wed, Dec 22, 2021 at 7:20 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Wed, 22 Dec 2021 at 12:28 AM, Ashutosh Sharma <ashu.coek88@gmail.com>\n> wrote:\n>\n>>\n>> Is it okay to share the same tablespace (infact relfile) between the\n>> primary and standby server? Perhaps NO.\n>>\n>\n>> Oops, yeah absolutely they can never share the tablespace path. So we\n> can ignore this issue.\n>\n\nThat's right. I agree. 
thanks!\n\n--\nWith Regards,\nAshutosh Sharma.", "msg_date": "Wed, 22 Dec 2021 14:00:48 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Checkpointer crashes with \"PANIC: could not fsync file\n \"pg_tblspc/..\"" } ]
[ { "msg_contents": "This patch needs another close pass and possibly some refactoring to \navoid copy-and-paste, but I'm putting this out here, since people are \nalready testing with Python 3.11 and will surely run into this problem.\n\nThe way plpy.commit() and plpy.rollback() handle errors is not ideal. \nThey end up just longjmping back to the main loop, without telling the \nPython interpreter about it. This hasn't been a problem so far, \napparently, but with Python 3.11-to-be, this appears to cause corruption \nin the state of the Python interpreter. This is readily reproducible \nand will cause crashes in the plpython_transaction test.\n\nThe fix is that we need to catch the PostgreSQL error and turn it into a \nPython exception, like we do for other places where plpy.* methods call \ninto PostgreSQL internals.", "msg_date": "Wed, 22 Dec 2021 09:24:06 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "fix crash with Python 3.11" }, { "msg_contents": "On Wed, 2021-12-22 at 09:24 +0100, Peter Eisentraut wrote:\r\n> The fix is that we need to catch the PostgreSQL error and turn it into a \r\n> Python exception, like we do for other places where plpy.* methods call \r\n> into PostgreSQL internals.\r\n\r\nTested on Ubuntu 20.04, with 3.11.0a3 (built by hand) and 3.8.10 (from\r\nthe repositories). The tests pass, so LGTM. I agree that tidying up\r\nsome of the code duplication would be nice.\r\n\r\nThanks,\r\n--Jacob\r\n", "msg_date": "Tue, 11 Jan 2022 18:38:10 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: fix crash with Python 3.11" }, { "msg_contents": "Hello,\n\nI run vcregeress plcheck on Windows Server 2016 with python 3.11.0a3 and python 3.9.7 which are installed using installer.\nAll tests are passed. 
I'm not familiar with PL/Python but I think it's good.\n\nregards, sho kato", "msg_date": "Fri, 14 Jan 2022 05:30:27 +0000", "msg_from": "\"kato-sho@fujitsu.com\" <kato-sho@fujitsu.com>", "msg_from_op": false, "msg_subject": "Re: fix crash with Python 3.11" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> The way plpy.commit() and plpy.rollback() handle errors is not ideal. \n> They end up just longjmping back to the main loop, without telling the \n> Python interpreter about it. This hasn't been a problem so far, \n> apparently, but with Python 3.11-to-be, this appears to cause corruption \n> in the state of the Python interpreter. This is readily reproducible \n> and will cause crashes in the plpython_transaction test.\n> The fix is that we need to catch the PostgreSQL error and turn it into a \n> Python exception, like we do for other places where plpy.* methods call \n> into PostgreSQL internals.\n\nI agree that the existing code is broken and works only accidentally,\nbut I fear this patch does little to improve matters. In particular,\nit returns control to Python without having done anything to clean\nup the now-invalid state of the transaction system. (That is,\nthere's nothing analogous to PLy_spi_subtransaction_abort's\nRollbackAndReleaseCurrentSubTransaction call in these new PG_CATCH\nblocks). The existing test cases apparently fail to trip over that\nbecause Python just throws the exception again, but what if someone\nwrites a plpython function that catches the exception and then tries\nto perform new SPI actions?\n\nI think a possible fix is:\n\n1. 
Before entering the PG_TRY block, check for active subtransaction(s)\nand immediately throw a Python error if there is one. (This corresponds\nto the existing errors \"cannot commit while a subtransaction is active\"\nand \"cannot roll back while a subtransaction is active\". The point is\nto reduce the number of system states we have to worry about below.)\n\n2. In the PG_CATCH block, after collecting the error data do\n\tAbortOutOfAnyTransaction();\n\tStartTransactionCommand();\nwhich gets us into a good state with no active subtransactions.\n\nI'm not sure that those two are the best choices of xact.c\nentry points, but there's precedent for that in autovacuum.c\namong other places. (autovacuum seems to think that blocking\ninterrupts is a good idea too.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 16 Jan 2022 17:53:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix crash with Python 3.11" }, { "msg_contents": "On 16.01.22 23:53, Tom Lane wrote:\n> I think a possible fix is:\n> \n> 1. Before entering the PG_TRY block, check for active subtransaction(s)\n> and immediately throw a Python error if there is one. (This corresponds\n> to the existing errors \"cannot commit while a subtransaction is active\"\n> and \"cannot roll back while a subtransaction is active\". The point is\n> to reduce the number of system states we have to worry about below.)\n> \n> 2. 
In the PG_CATCH block, after collecting the error data do\n> \tAbortOutOfAnyTransaction();\n> \tStartTransactionCommand();\n> which gets us into a good state with no active subtransactions.\n> \n> I'm not sure that those two are the best choices of xact.c\n> entry points, but there's precedent for that in autovacuum.c\n> among other places.\n\nAFAICT, AbortOutOfAnyTransaction() also aborts subtransactions, so why \ndo you suggest the separate handling of subtransactions?\n\n\n\n", "msg_date": "Tue, 25 Jan 2022 15:21:15 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: fix crash with Python 3.11" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 16.01.22 23:53, Tom Lane wrote:\n>> I think a possible fix is:\n>> \n>> 1. Before entering the PG_TRY block, check for active subtransaction(s)\n>> and immediately throw a Python error if there is one. (This corresponds\n>> to the existing errors \"cannot commit while a subtransaction is active\"\n>> and \"cannot roll back while a subtransaction is active\". The point is\n>> to reduce the number of system states we have to worry about below.)\n\n> AFAICT, AbortOutOfAnyTransaction() also aborts subtransactions, so why \n> do you suggest the separate handling of subtransactions?\n\nWe don't want these operations to be able to cancel subtransactions,\ndo we? The existing errors certainly suggest not.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jan 2022 10:54:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix crash with Python 3.11" }, { "msg_contents": "On 25.01.22 16:54, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 16.01.22 23:53, Tom Lane wrote:\n>>> I think a possible fix is:\n>>>\n>>> 1. Before entering the PG_TRY block, check for active subtransaction(s)\n>>> and immediately throw a Python error if there is one. 
(This corresponds\n>>> to the existing errors \"cannot commit while a subtransaction is active\"\n>>> and \"cannot roll back while a subtransaction is active\". The point is\n>>> to reduce the number of system states we have to worry about below.)\n> \n>> AFAICT, AbortOutOfAnyTransaction() also aborts subtransactions, so why\n>> do you suggest the separate handling of subtransactions?\n> \n> We don't want these operations to be able to cancel subtransactions,\n> do we? The existing errors certainly suggest not.\n\nI've been struggling to make progress on this. First, throwing the \nPython exception suggested in #1 above would require a significant \namount of new code. (We have code to create an exception out of \nErrorData, but no code to make one up from nothing.) This would need \nfurther thought on how to arrange the code sensibly. Second, calling \nAbortOutOfAnyTransaction() appears to clean up so much stuff that even \nthe data needed for error handling in PL/Python is wiped out. The \nsymptoms are various crashes on pointers now valued 0x7f7f7f... You fix \none, you get the next one. So more analysis would be required there, too.\n\nOne way to way to address the first problem is to not handle the \"cannot \ncommit while a subtransaction is active\" and similar cases manually but \nlet _SPI_commit() throw the error and then check in PL/Python for the \nerror code ERRCODE_INVALID_TRANSACTION_TERMINATION. (The code in \n_SPI_commit() is there after all explicitly for PLs, so if we're not \nusing it, then we're doing it wrong.) 
And then only call \nAbortOutOfAnyTransaction() (or whatever) if it's a different code, which \nwould mean something in the actual committing went wrong.\n\nBut in any case, for either implementation it seems then you'd get \ndifferent error behavior from .commit and .rollback calls depending on \nthe specific error, which seems weird.\n\nAltogether, maybe the response to\n\n > The existing test cases apparently fail to trip over that\n > because Python just throws the exception again, but what if someone\n > writes a plpython function that catches the exception and then tries\n > to perform new SPI actions?\n\nperhaps should be, don't do that then?\n\n\n", "msg_date": "Tue, 1 Feb 2022 15:24:53 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: fix crash with Python 3.11" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I've been struggling to make progress on this. First, throwing the \n> Python exception suggested in #1 above would require a significant \n> amount of new code. (We have code to create an exception out of \n> ErrorData, but no code to make one up from nothing.) This would need \n> further thought on how to arrange the code sensibly. Second, calling \n> AbortOutOfAnyTransaction() appears to clean up so much stuff that even \n> the data needed for error handling in PL/Python is wiped out.\n\nYeah, it's messy.\n\n> One way to way to address the first problem is to not handle the \"cannot \n> commit while a subtransaction is active\" and similar cases manually but \n> let _SPI_commit() throw the error and then check in PL/Python for the \n> error code ERRCODE_INVALID_TRANSACTION_TERMINATION. 
(The code in \n> _SPI_commit() is there after all explicitly for PLs, so if we're not \n> using it, then we're doing it wrong.)\n\nTBH, I've thought from day one that _SPI_commit and friends are unusably\nsimplistic, precisely because they throw all this error-recovery\ncomplexity onto the caller, and provide no tools to handle it without\ndropping down to a lower level of abstraction.\n\n> But in any case, for either implementation it seems then you'd get \n> different error behavior from .commit and .rollback calls depending on \n> the specific error, which seems weird.\n\nWell, I think we *have to* do that. The INVALID_TRANSACTION_TERMINATION\nerrors mean that we're in some kind of atomic execution context that\nwe mustn't destroy. Other errors mean that the commit failed, and that\nhas to be cleaned up somehow.\n\n(Note: there is also an INVALID_TRANSACTION_TERMINATION error in\nHoldPinnedPortals, which is now seeming like a mistake to me; we should\nchange that to some other errcode, perhaps. There are no others\nbesides _SPI_commit/_SPI_rollback.)\n\n> Altogether, maybe the response to\n\n>>> The existing test cases apparently fail to trip over that\n>>> because Python just throws the exception again, but what if someone\n>>> writes a plpython function that catches the exception and then tries\n>>> to perform new SPI actions?\n\n> perhaps should be, don't do that then?\n\nThat seems far south of acceptable to me. I experimented with the\nattached script, which in HEAD just results in aborting the Python\nscript -- not good, but at least the general state of the session\nis okay. 
With your patch, I get\n\nINFO: sqlstate: 23503\nINFO: after exception\nWARNING: buffer refcount leak: [8760] (rel=base/16384/38401, blockNum=0, flags=0x93800000, refcount=1 1)\nWARNING: relcache reference leak: relation \"testfk\" not closed\nWARNING: relcache reference leak: relation \"testpk\" not closed\nWARNING: TupleDesc reference leak: TupleDesc 0x7f3a12e0de80 (38403,-1) still referenced\nWARNING: snapshot 0x1cba150 still active\nDO\n f1 \n----\n 0\n 1\n(2 rows)\n\nSo aside from all the internal problems, we've actually managed to commit\nan invalid database state. I do not find that OK.\n\nI think that maybe we could get somewhere by having _SPI_commit/rollback\nwork like this:\n\n1. Throw the INVALID_TRANSACTION_TERMINATION errors if appropriate.\nThe caller can catch those and proceed if desired, knowing that the\ntransaction system is in the same state it was.\n\n2. Use a PG_TRY block to do the commit or rollback. On error,\nroll back (knowing that we are not in a subtransaction) and then\nre-throw the error.\n\nIf the caller catches an error other than INVALID_TRANSACTION_TERMINATION,\nit can proceed, but it's still on the hook to do SPI_start_transaction\nbefore it attempts any new database access (just as it would be if\nthere had not been an error).\n\nWe could provide a simplified API in which SPI_start_transaction is\nautomatically included for either success or failure, so that the caller\ndoesn't have the delayed-cleanup responsibility. I'm not actually sure\nthat there is any situation where that couldn't be done, but putting it\ninto the existing functions would be a API break (... unless we turn\nSPI_start_transaction into a no-op, which might be OK?) 
Basically this'd\nbe making the behavior of SPI_commit_and_chain the norm, except you'd\nhave the option of reverting to default transaction settings instead\nof the previous ones.\n\nSo basically where we'd end up is that plpython would do about what\nyou show in your patch, but then there's additional work needed in\nspi.c so that we're not leaving the rest of the system in a bad state.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 07 Feb 2022 17:38:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix crash with Python 3.11" }, { "msg_contents": "I wrote:\n> We could provide a simplified API in which SPI_start_transaction is\n> automatically included for either success or failure, so that the caller\n> doesn't have the delayed-cleanup responsibility. I'm not actually sure\n> that there is any situation where that couldn't be done, but putting it\n> into the existing functions would be a API break (... unless we turn\n> SPI_start_transaction into a no-op, which might be OK?) Basically this'd\n> be making the behavior of SPI_commit_and_chain the norm, except you'd\n> have the option of reverting to default transaction settings instead\n> of the previous ones.\n> So basically where we'd end up is that plpython would do about what\n> you show in your patch, but then there's additional work needed in\n> spi.c so that we're not leaving the rest of the system in a bad state.\n\nHere's a draft patch that works that way. I haven't tested it against\nPython 3.11, but it handles the case I showed upthread (now incorporated\nas a regression test), as well as Alexander's repeat-till-stack-overflow\nproblem. The one thing I found that I wasn't expecting is that\nAtEOXact_SPI is quite a few bricks shy of a load: it doesn't handle\ncases where some atomic contexts are atop an internal_xact one. 
That\nhappens in the new test case because the error is thrown from RI\ntriggers that themselves use SPI.\n\nThis is draft mainly in that\n\n* I didn't touch spi.sgml yet.\n\n* It might be a good idea to add parallel test cases for the other PLs.\n\n* I'm not satisfied with using static storage for\nSaveTransactionCharacteristics/RestoreTransactionCharacteristics.\nIt'd be better for them to use a struct allocated locally in the caller.\nThat would be a simple enough change, but also an API break; on the\nother hand, I really doubt anything outside core code is calling those.\nAnyone object to changing them?\n\nI'm not sure how to proceed with this. It's kind of a large change\nto be putting into back branches, but our hands will be forced once\nPython 3.11 is out. Maybe put it in HEAD now, and plan to back-patch\nin a few months?\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 23 Feb 2022 14:31:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix crash with Python 3.11" }, { "msg_contents": "I wrote:\n> * It might be a good idea to add parallel test cases for the other PLs.\n\nAs I suspected, plperl and pltcl show exactly the same problems\nwhen trapping a failure at commit, reinforcing my opinion that this\nis a SPI bug that needs to be fixed in SPI. (plpgsql is not subject\nto this problem, because its only mechanism for trapping errors is\na BEGIN block, ie a subtransaction, so it won't get to the interesting\npart.) 
They do have logic to catch the thrown error, though, so no\nadditional fix is needed in either once we fix the core code.\n\nv2 attached adds those test cases.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 24 Feb 2022 13:17:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix crash with Python 3.11" }, { "msg_contents": "I wrote:\n> * I'm not satisfied with using static storage for\n> SaveTransactionCharacteristics/RestoreTransactionCharacteristics.\n\nLooking closer at this, I was not too amused to discover that of the three\nexisting SaveTransactionCharacteristics calls, two already conflict with\neach other: _SPI_commit calls SaveTransactionCharacteristics and then\ncalls CommitTransactionCommand, which again calls\nSaveTransactionCharacteristics, overwriting the static storage.\nI *think* there's no live bug there, because the state probably wouldn't\nhave changed in between; but considering we run arbitrary user-defined\ncode between those two points I sure wouldn't bet on it.\n\n0001 attached is the same code patch as before, but now with spi.sgml\nupdates; 0002 changes the API for Save/RestoreTransactionCharacteristics.\nIf anyone's really worried about backpatching 0002, we could perhaps\nget away with doing that only in HEAD.\n\nI found in 0002 that I had to make CommitTransactionCommand call\nSaveTransactionCharacteristics unconditionally, else I got warnings\nabout possibly-uninitialized local variables. It's cheap enough\nthat I'm not too fussed about that. I don't get any warnings from\nthe similar code in spi.c, probably because those functions can't\nbe inlined there.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 24 Feb 2022 14:34:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix crash with Python 3.11" }, { "msg_contents": "Is it time yet to back-patch 2e517818f (\"Fix SPI's handling of errors\nduring transaction commit\")? 
We know we're going to have to do it\nbefore Python 3.11 ships, and it's been stable in HEAD for 3.5 months\nnow. Also, the Fedora guys absorbed the patch a couple weeks ago [1]\nbecause they're already using 3.11 in rawhide, and I've not heard\ncomplaints from that direction.\n\nMy inclination at this point is to not back-patch the second change\n12d768e70 (\"Don't use static storage for SaveTransactionCharacteristics\").\nIt's not clear that the benefit would be worth even a small risk of\nsomebody being unhappy about the API break.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BHKMWMY_e2otmTJDjKUAvC8Urh4rzSWOPZ%3DfszU5brkBP97ng%40mail.gmail.com\n\n\n", "msg_date": "Tue, 21 Jun 2022 12:33:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix crash with Python 3.11" }, { "msg_contents": "On 6/21/22 18:33, Tom Lane wrote:\n> My inclination at this point is to not back-patch the second change\n> 12d768e70 (\"Don't use static storage for SaveTransactionCharacteristics\").\n> It's not clear that the benefit would be worth even a small risk of\n> somebody being unhappy about the API break.\n\nActually, the backport of 2e517818f (\"Fix SPI's handling of errors\") \nalready broke the API for code using SPICleanup, as that function had \nbeen removed. Granted, it's not documented, but still exported.\n\nI propose to re-introduce a no-op placeholder similar to what we have \nfor SPI_start_transaction, somewhat like the attached patch.\n\nRegards\n\nMarkus", "msg_date": "Thu, 23 Jun 2022 09:41:00 +0200", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: fix crash with Python 3.11" }, { "msg_contents": "Markus Wanner <markus.wanner@enterprisedb.com> writes:\n> Actually, the backport of 2e517818f (\"Fix SPI's handling of errors\") \n> already broke the API for code using SPICleanup, as that function had \n> been removed. 
Granted, it's not documented, but still exported.\n\nUnder what circumstances would it be OK for outside code to call\nSPICleanup?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Jun 2022 09:34:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix crash with Python 3.11" }, { "msg_contents": "On 6/23/22 15:34, Tom Lane wrote:\n> Under what circumstances would it be OK for outside code to call\n> SPICleanup?\n\nFor the same reasons previous Postgres versions called SPICleanup: from \na sigsetjmp handler that duplicates most of what Postgres does in such a \nsituation.\n\nHowever, I think that's the wrong question to ask for a stable branch. \nPostgres did export this function in previous versions. Removing it \naltogether constitutes an API change and makes extensions that link to \nit fail to even load, which is a bad way to fail after a patch version \nupgrade. Even if its original use was not sound in the first place.\n\nOfc my proposed patch is not meant for master, only for stable branches.\n\nBest Regards\n\nMarkus\n\n\n", "msg_date": "Thu, 23 Jun 2022 21:57:07 +0200", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: fix crash with Python 3.11" }, { "msg_contents": "Markus Wanner <markus.wanner@enterprisedb.com> writes:\n> On 6/23/22 15:34, Tom Lane wrote:\n>> Under what circumstances would it be OK for outside code to call\n>> SPICleanup?\n\n> For the same reasons previous Postgres versions called SPICleanup: from \n> a sigsetjmp handler that duplicates most of what Postgres does in such a \n> situation.\n\nDoes such code exist? 
I don't see any other calls in Debian code search,\nand I find it hard to believe that anyone would think such a thing is\nmaintainable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Jun 2022 18:54:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix crash with Python 3.11" }, { "msg_contents": "On 6/24/22 00:54, Tom Lane wrote:\n> Does such code exist? I don't see any other calls in Debian code search,\n> and I find it hard to believe that anyone would think such a thing is\n> maintainable.\n\nSuch a thing does exist within PGLogical and BDR, yes.\n\nThanks for your concern about maintainability. So far, that part was not \nposing any trouble. Looking at e.g. postgres.c, the sigsetjmp handler \nthere didn't change all that much in recent years. Much of the code \nthere is from around 2004 written by you.\n\nHowever, that shouldn't be your concern at all. Postgres refusing to \nstart after a minor upgrade probably should, especially when it's due to \nan API change in a stable branch.\n\nRegards\n\nMarkus\n\n\n", "msg_date": "Fri, 24 Jun 2022 14:04:50 +0200", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: fix crash with Python 3.11" }, { "msg_contents": "On 23.06.22 09:41, Markus Wanner wrote:\n> \n> On 6/21/22 18:33, Tom Lane wrote:\n>> My inclination at this point is to not back-patch the second change\n>> 12d768e70 (\"Don't use static storage for \n>> SaveTransactionCharacteristics\").\n>> It's not clear that the benefit would be worth even a small risk of\n>> somebody being unhappy about the API break.\n> \n> Actually, the backport of 2e517818f (\"Fix SPI's handling of errors\") \n> already broke the API for code using SPICleanup, as that function had \n> been removed. 
Granted, it's not documented, but still exported.\n> \n> I propose to re-introduce a no-op placeholder similar to what we have \n> for SPI_start_transaction, somewhat like the attached patch.\n\nI have applied your patch to branches 11 through 14.\n\n\n", "msg_date": "Mon, 18 Jul 2022 21:09:54 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: fix crash with Python 3.11" } ]
[ { "msg_contents": "Hi,\n\nWhile reviewing some logical replication patches, 
I noticed that in\n> function get_rel_sync_entry() we always invoke get_rel_relispartition() \n> and get_rel_relkind() at the beginning which could cause unnecessary\n> cache access.\n> \n> ---\n> get_rel_sync_entry(PGOutputData *data, Oid relid)\n> {\n> \tRelationSyncEntry *entry;\n> \tbool\t\tam_partition = get_rel_relispartition(relid);\n> \tchar\t\trelkind = get_rel_relkind(relid);\n> ---\n> \n> The extra cost could sometimes be noticeable because get_rel_sync_entry is a\n> hot function which is executed for each change. And the 'am_partition' and\n> 'relkind' are necessary only when we need to rebuild the RelationSyncEntry.\n> \n> Here is the perf result for the case when inserted large amounts of data into a\n> un-published table in which case the cost is noticeable.\n> \n> --12.83%--pgoutput_change\n> |--11.84%--get_rel_sync_entry\n> \t |--4.76%--get_rel_relispartition\n> \t\t|--4.70%--get_rel_relkind\n> \n> So, I think it would be better if we do the initialization only when\n> RelationSyncEntry in invalid.\n> \n> Attach a small patch which delay the initialization.\n> \n> Thoughts ?\n\nA simple benchmarking that replicates pgbench workload showed me that\nthe function doesn't enter the path to use the am_partition and\nrelkind in almost all (99.999..%) cases and I don't think it is a\nspecial case. Thus I think we can expect that we gain about 10%\nwithout any possibility of loss.\n\nAddition to that, it is simply a good practice to keep variable scopes\nnarrow.\n\nSo +1 for this change.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 23 Dec 2021 10:19:17 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delay the variable initialization in get_rel_sync_entry" }, { "msg_contents": "On Wed, Dec 22, 2021, at 10:11 AM, houzj.fnst@fujitsu.com wrote:\n> When reviewing some logical replication patches. 
I noticed that in\n> function get_rel_sync_entry() we always invoke get_rel_relispartition() \n> and get_rel_relkind() at the beginning which could cause unnecessary\n> cache access.\n> \n> ---\n> get_rel_sync_entry(PGOutputData *data, Oid relid)\n> {\n> RelationSyncEntry *entry;\n> bool\t\tam_partition = get_rel_relispartition(relid);\n> char\t\trelkind = get_rel_relkind(relid);\n> ---\n> \n> The extra cost could sometimes be noticeable because get_rel_sync_entry is a\n> hot function which is executed for each change. And the 'am_partition' and\n> 'relkind' are necessary only when we need to rebuild the RelationSyncEntry.\n> \n> Here is the perf result for the case when inserted large amounts of data into a\n> un-published table in which case the cost is noticeable.\n> \n> --12.83%--pgoutput_change\n> |--11.84%--get_rel_sync_entry\n> |--4.76%--get_rel_relispartition\n> |--4.70%--get_rel_relkind\nGood catch. WFM. Deferring variable initialization close to its first use is\ngood practice.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\", \"msg_date\": \"Thu, 23 Dec 2021 12:54:41 -0300\", \"msg_from\": \"\\\"Euler Taveira\\\" <euler@eulerto.com>\", \"msg_from_op\": false, \"msg_subject\": \"Re: Delay the variable initialization in get_rel_sync_entry\" }, { \"msg_contents\": \"On Thu, Dec 23, 2021 at 12:54:41PM -0300, Euler Taveira wrote:\n> On Wed, Dec 22, 2021, at 10:11 AM, houzj.fnst@fujitsu.com wrote:\n>> The extra cost could sometimes be noticeable because get_rel_sync_entry is a\n>> hot function which is executed for each change. And the 'am_partition' and\n>> 'relkind' are necessary only when we need to rebuild the RelationSyncEntry.\n>> \n>> Here is the perf result for the case when inserted large amounts of data into a\n>> un-published table in which case the cost is noticeable.\n>> \n>> --12.83%--pgoutput_change\n>> |--11.84%--get_rel_sync_entry\n>> |--4.76%--get_rel_relispartition\n>> |--4.70%--get_rel_relkind\n\nHow does the perf balance change once you apply the patch? Do we have\nanything else that stands out? Getting rid of this bottleneck is fine\nby itself, but I am wondering if there are more things to worry about\nor not.\n\n> Good catch. WFM. Deferring variable initialization close to its first use is\n> good practice.\n\nYeah, it is usually a good practice to have the declaration within\nthe code block that uses it rather than piling everything at the\nbeginning of the function. 
Being able to see that in profiles is\nannoying, and the change is simple, so I'd like to backpatch it.\n\nThis is a period of vacations for a lot of people, so I'll wait until\nthe beginning-ish of January before doing anything.\n--\nMichael\", \"msg_date\": \"Fri, 24 Dec 2021 09:12:54 +0900\", \"msg_from\": \"Michael Paquier <michael@paquier.xyz>\", \"msg_from_op\": false, \"msg_subject\": \"Re: Delay the variable initialization in get_rel_sync_entry\" }, { \"msg_contents\": \"On Friday, December 24, 2021 8:13 AM Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Thu, Dec 23, 2021 at 12:54:41PM -0300, Euler Taveira wrote:\n> > On Wed, Dec 22, 2021, at 10:11 AM, houzj.fnst@fujitsu.com wrote:\n> >> The extra cost could sometimes be noticeable because get_rel_sync_entry is\n> a\n> >> hot function which is executed for each change. And the 'am_partition' and\n> >> 'relkind' are necessary only when we need to rebuild the RelationSyncEntry.\n> >>\n> >> Here is the perf result for the case when inserted large amounts of data into\n> a\n> >> un-published table in which case the cost is noticeable.\n> >>\n> >> --12.83%--pgoutput_change\n> >> |--11.84%--get_rel_sync_entry\n> >> |--4.76%--get_rel_relispartition\n> >> |--4.70%--get_rel_relkind\n> \n> How does the perf balance change once you apply the patch? Do we have\n> anything else that stands out? Getting rid of this bottleneck is fine\n> by itself, but I am wondering if there are more things to worry about\n> or not.\n\nThanks for the response.\nHere is the perf result of pgoutput_change after applying the patch.\nI didn't notice anything else that stands out.\n\n |--2.99%--pgoutput_change\n |--1.80%--get_rel_sync_entry\n |--1.56%--hash_search\n\nAlso attached the complete profiles.\n\n> > Good catch. WFM. 
Deferring variable initialization close to its first use is\n> > good practice.\n> \n> Yeah, it is usually a good practice to have the declaration within\n> the code block that uses it rather than piling everything at the\n> beginning of the function. Being able to see that in profiles is\n> annoying, and the change is simple, so I'd like to backpatch it.\n\n+1\n\n> This is a period of vacations for a lot of people, so I'll wait until\n> the beginning-ish of January before doing anything.\n\nThanks, added it to CF.\nhttps://commitfest.postgresql.org/36/3471/\n\nBest regards,\nHou zj", "msg_date": "Fri, 24 Dec 2021 13:27:26 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Delay the variable initialization in get_rel_sync_entry" }, { "msg_contents": "On Fri, Dec 24, 2021 at 01:27:26PM +0000, houzj.fnst@fujitsu.com wrote:\n> Here is the perf result of pgoutput_change after applying the patch.\n> I didn't notice something else that stand out.\n> \n> |--2.99%--pgoutput_change\n> |--1.80%--get_rel_sync_entry\n> |--1.56%--hash_search\n> \n> Also attach complete profiles.\n\nThanks. I have also done my own set of measurements, and the\ndifference is noticeable in the profiles I looked at. So, applied\ndown to 13.\n--\nMichael", "msg_date": "Wed, 5 Jan 2022 10:30:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Delay the variable initialization in get_rel_sync_entry" }, { "msg_contents": "On Wednesday, January 5, 2022 9:31 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Fri, Dec 24, 2021 at 01:27:26PM +0000, houzj.fnst@fujitsu.com wrote:\n> > Here is the perf result of pgoutput_change after applying the patch.\n> > I didn't notice something else that stand out.\n> >\n> > |--2.99%--pgoutput_change\n> > |--1.80%--get_rel_sync_entry\n> > |--1.56%--hash_search\n> >\n> > Also attach complete profiles.\n> \n> Thanks. 
I have also done my own set of measurements, and the difference is\n> noticeable in the profiles I looked at. So, applied down to 13.\nThanks for pushing!\n\nBest regards,\nHou zj\n\n\n", "msg_date": "Wed, 5 Jan 2022 05:51:44 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Delay the variable initialization in get_rel_sync_entry" }, { "msg_contents": "On Wed, Jan 5, 2022 at 10:31 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Fri, Dec 24, 2021 at 01:27:26PM +0000, houzj.fnst@fujitsu.com wrote:\n> > Here is the perf result of pgoutput_change after applying the patch.\n> > I didn't notice something else that stand out.\n> >\n> > |--2.99%--pgoutput_change\n> > |--1.80%--get_rel_sync_entry\n> > |--1.56%--hash_search\n> >\n> > Also attach complete profiles.\n>\n> Thanks. I have also done my own set of measurements, and the\n> difference is noticeable in the profiles I looked at. So, applied\n> down to 13.\n\nThanks. I agree the variables were being defined in the wrong place\nbefore this patch.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Jan 2022 14:16:25 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Delay the variable initialization in get_rel_sync_entry" } ]
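The committed refactor in the thread above is easy to model outside the server. The sketch below uses toy stand-ins (`toy_relispartition`, `ToySyncEntry` are invented names, not the real pgoutput code) to show the pattern the patch applied: defer the expensive catalog-style lookups until the cached entry actually has to be rebuilt. A call counter makes the saving observable.

```c
#include <stdbool.h>
#include <assert.h>

/* Toy stand-ins for the syscache lookups; the counter makes the
 * saving observable.  These names are illustrative only, not the real
 * get_rel_relispartition()/get_rel_relkind(). */
static int lookup_calls = 0;

static bool
toy_relispartition(int relid)
{
    lookup_calls++;
    return (relid % 2) == 0;
}

static char
toy_relkind(int relid)
{
    lookup_calls++;
    return 'r';
}

typedef struct ToySyncEntry
{
    bool replicate_valid;
    bool am_partition;
    char relkind;
} ToySyncEntry;

/* The committed change in a nutshell: the two lookups run only on the
 * rebuild path, not unconditionally at function entry. */
static void
toy_get_rel_sync_entry(ToySyncEntry *entry, int relid)
{
    if (!entry->replicate_valid)
    {
        entry->am_partition = toy_relispartition(relid);
        entry->relkind = toy_relkind(relid);
        entry->replicate_valid = true;
    }
}
```

On the common path (entry already valid, as in the 99.999% case Horiguchi measured) no lookups run at all, which is where the reported ~10% win in this profile comes from.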
[ { \"msg_contents\": \"Hello, hackers!\n\nCurrently, all auto_explain output is directed to the postgres log, and it is \nnot comfortable to extract on big, highly loaded systems.\nMy proposal is to add an option for auto_explain to log its data to a separate \nlogfile. In my patch I plan to (re)open the file every time the associated GUC \nvariable changes, including at extension boot. In case of an error or an \nunexpected situation, the file is closed and output is directed to the common \npostgres log.\nWhat do you think about this idea? I would also be grateful for any \nideas on how to conveniently and suitably rotate that new log\", \"msg_date\": \"Wed, 22 Dec 2021 16:54:37 +0300\", \"msg_from\": \"Timofey <tim-shlyap@yandex-team.ru>\", \"msg_from_op\": true, \"msg_subject\": \"[Proposal][WIP] Add option to log auto_explain output to separate\n logfile\" }, { \"msg_contents\": \"Hi\n\nOn 22. 12. 2021 at 14:54, Timofey <tim-shlyap@yandex-team.ru>\nwrote:\n\n> Hello, hackers!\n>\n> Now, all of auto_explain output is directed to postgres's log and it is\n> not comfortably to extract on big highloaded systems.\n> My proposal is add option to auto_explain to log data to separate\n> logfile. In my patch I plan to (re)open file every time associated guc\n> variable is changing, included extension boot. In case of error or\n> unexpected situation file closes and output directed to common\n> postgres's log.\n> What do you think about this idea? Also i would be grateful about any\n> ideas how conveniently and suitable rotate that new log\n>\n\nIt is a good idea, but I think it needs to be implemented more generally.\nThere should be a) a possibility to add some extra information that\nallows redirecting into other tools like rsyslog, and b) a possibility to use more\nlogfiles than one - for more extensions, or for a slow query log, or for logging\nsensitive or audit data, ... A general design solves the problem of log\nrotation.\n\nRegards\n\nPavel\", \"msg_date\": \"Wed, 22 Dec 2021 15:37:18 +0100\", \"msg_from\": \"Pavel Stehule <pavel.stehule@gmail.com>\", \"msg_from_op\": false, \"msg_subject\": \"Re: [Proposal][WIP] Add option to log auto_explain output to separate\n logfile\" } ]
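The file handling proposed in the thread above can be sketched with a minimal, self-contained model. Everything here is hypothetical — `explain_log_assign` is an invented name, not auto_explain code — but it shows the described behavior: reopen the dedicated logfile whenever the (assumed GUC-like) setting changes, and fall back to the common log (modeled as stderr) when the file cannot be opened.

```c
#include <stdio.h>
#include <assert.h>

/* Hypothetical sketch of the proposal, not real extension code. */
static FILE *explain_log_file = NULL;

/* Called whenever the logfile setting changes; returns the stream the
 * next auto_explain line should be written to. */
static FILE *
explain_log_assign(const char *path)
{
    if (explain_log_file != NULL)
    {
        fclose(explain_log_file);
        explain_log_file = NULL;
    }

    /* An empty setting means "use the common log". */
    if (path != NULL && path[0] != '\0')
        explain_log_file = fopen(path, "a");

    /* On open failure, output falls back to the common log. */
    return (explain_log_file != NULL) ? explain_log_file : stderr;
}
```

In a real extension this would hang off a GUC assign hook, and rotation could be handled by closing and reopening the file on SIGHUP; both are left out of this sketch.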
[ { "msg_contents": "... ok, I see that the answer is yes, according to the commit comment\nfor eccfef8:\n\n Currently, ICU-provided collations can only be explicitly named\n collations. The global database locales are still always libc-provided.\n\nI got there the long way, by first wondering how to tell whether a\ndatcollate or datctype string was intended for libc or ICU, and then\nreading pg_perm_setlocale, and then combing through the docs at\nCREATE DATABASE and createdb and initdb and Collation Support and\npg_database and the release notes for 10, sure that I would find\nthe answer staring at me in one of those places once I knew I was asking.\n\nNext question: the \"currently\" in that comment suggests that could change,\nbut is there any present intention to change it, or is this likely to just\nbe the way it is for the foreseeable future?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 22 Dec 2021 16:46:40 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Are datcollate/datctype always libc even under --with-icu ?" }, { "msg_contents": "\tChapman Flack wrote:\n\n> Next question: the \"currently\" in that comment suggests that could change,\n> but is there any present intention to change it, or is this likely to just\n> be the way it is for the foreseeable future?\n\nSome related patches and discussions:\n\n* ICU as default collation provider\nhttps://commitfest.postgresql.org/21/1543/\n\n* ICU for global collation\nhttps://commitfest.postgresql.org/25/2256/\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: https://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n", "msg_date": "Thu, 23 Dec 2021 16:11:40 +0100", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: Are datcollate/datctype always libc even under --with-icu ?" } ]
[ { \"msg_contents\": \"Hi Hackers,\n\nI am considering implementing RPO (recovery point objective) enforcement\nfeature for Postgres where the WAL writes on the primary are stalled when\nthe WAL distance between the primary and standby exceeds the configured\n(replica_lag_in_bytes) threshold. This feature is useful particularly in\nthe disaster recovery setups where primary and standby are in different\nregions and synchronous replication can't be set up for latency and\nperformance reasons yet requires some level of RPO enforcement.\n\nThe idea here is to calculate the lag between the primary and the standby\n(Async?) server during XLogInsert and block the caller until the lag is\nless than the threshold value. We can calculate the max lag by iterating\nover ReplicationSlotCtl->replication_slots. If this is not something we\nwant to do in core, at least adding a hook for XLogInsert is of\ngreat value.\n\nA few other scenarios I can think of with the hook are:\n\n 1. Enforcing RPO as described above\n 2. Enforcing rate limit and slow throttling when sync standby is falling\n behind (could be flush lag or replay lag)\n 3. Transactional log rate governance - useful for cloud providers to\n provide SKU sizes based on allowed WAL writes.\n\nThoughts?\n\nThanks,\nSatya\", \"msg_date\": \"Wed, 22 Dec 2021 16:23:27 -0800\", \"msg_from\": \"SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>\", \"msg_from_op\": true, \"msg_subject\": \"Throttling WAL inserts when the standby falls behind more than the\n configured replica_lag_in_bytes\" }, { \"msg_contents\": \"On Thu, Dec 23, 2021 at 5:53 AM SATYANARAYANA NARLAPURAM\n<satyanarlapuram@gmail.com> wrote:\n>\n> Hi Hackers,\n>\n> I am considering implementing RPO (recovery point objective) enforcement feature for Postgres where the WAL writes on the primary are stalled when the WAL distance between the primary and standby exceeds the configured (replica_lag_in_bytes) threshold. This feature is useful particularly in the disaster recovery setups where primary and standby are in different regions and synchronous replication can't be set up for latency and performance reasons yet requires some level of RPO enforcement.\n\n+1 for the idea in general. 
server during XLogInsert and block the caller until the lag is less than the threshold value. We can calculate the max lag by iterating over ReplicationSlotCtl->replication_slots.\n\nThe \"falling behind\" can also be quantified by the number of\nwrite-transactions on the primary. I think it's good to have the users\nchoose what the \"falling behind\" means for them. We can have something\nlike the \"recovery_target\" param with different options name, xid,\ntime, lsn.\n\n> If this is not something we don't want to do in the core, at least adding a hook for XlogInsert is of great value.\n\nIMHO, this feature may not be needed by everyone, the hook-way seems\nreasonable so that the postgres vendors can provide different\nimplementations (for instance they can write an extension that\nimplements this hook which can block writes on primary, write some log\nmessages, inform some service layer of the replicas falling behind the\nprimary etc.). If we were to have the hook in XLogInsert which gets\ncalled so frequently or XLogInsert is a hot-path, the hook really\nshould do as little work as possible, otherwise the write operations\nlatency may increase.\n\n> A few other scenarios I can think of with the hook are:\n>\n> Enforcing RPO as described above\n> Enforcing rate limit and slow throttling when sync standby is falling behind (could be flush lag or replay lag)\n> Transactional log rate governance - useful for cloud providers to provide SKU sizes based on allowed WAL writes.\n>\n> Thoughts?\n\nThe hook can help to achieve the above objectives but where to place\nit and what parameters it should take as input (or what info it should\nemit out of the server via the hook) are important too.\n\nHaving said all, the RPO feature can also be implemented outside of\nthe postgres, a simple implementation could be - get the primary\ncurrent wal lsn using pg_current_wal_lsn and all the replicas\nrestart_lsn using pg_replication_slot, if they differ by certain\namount, then issue 
ALTER SYSTEM SET READ ONLY command [1] on the\nprimary, this requires the connections to the server and proper access\nrights. This feature can also be implemented as an extension (without\nthe hook) which doesn't require any connections to the server yet can\naccess the required info primary current_wal_lsn, restart_lsn of the\nreplication slots etc, but the RPO enforcement may not be immediate as\nthe server doesn't have any hooks in XLogInsert or some other area.\n\n[1] - https://www.postgresql.org/message-id/CAAJ_b967uKBiW6gbHr5aPzweURYjEGv333FHVHxvJmMhanwHXA%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n\", \"msg_date\": \"Thu, 23 Dec 2021 16:17:06 +0530\", \"msg_from\": \"Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\", \"msg_from_op\": false, \"msg_subject\": \"Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes\" }, { \"msg_contents\": \"On Thu, Dec 23, 2021 at 5:53 AM SATYANARAYANA NARLAPURAM\n<satyanarlapuram@gmail.com> wrote:\n>\n> Hi Hackers,\n>\n> I am considering implementing RPO (recovery point objective) enforcement feature for Postgres where the WAL writes on the primary are stalled when the WAL distance between the primary and standby exceeds the configured (replica_lag_in_bytes) threshold. This feature is useful particularly in the disaster recovery setups where primary and standby are in different regions and synchronous replication can't be set up for latency and performance reasons yet requires some level of RPO enforcement.\n\nLimiting transaction rate when the standby falls behind is a good feature ...\n\n>\n> The idea here is to calculate the lag between the primary and the standby (Async?) server during XLogInsert and block the caller until the lag is less than the threshold value. We can calculate the max lag by iterating over ReplicationSlotCtl->replication_slots. 
If this is not something we don't want to do in the core, at least adding a hook for XlogInsert is of great value.\n\nbut doing it in XLogInsert does not seem to be a good idea. It's a\ncommon point for all kinds of logging including VACUUM. We could\naccidentally stall a critical VACUUM operation because of that.\n\nAs Bharath described, it is better handled by application-level monitoring.\n\n-- \nBest Wishes,\nAshutosh Bapat\", \"msg_date\": \"Thu, 23 Dec 2021 18:48:45 +0530\", \"msg_from\": \"Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\", \"msg_from_op\": false, \"msg_subject\": \"Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes\" }, { \"msg_contents\": \"On Thu, Dec 23, 2021 at 5:18 AM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> On Thu, Dec 23, 2021 at 5:53 AM SATYANARAYANA NARLAPURAM\n> <satyanarlapuram@gmail.com> wrote:\n> >\n> > Hi Hackers,\n> >\n> > I am considering implementing RPO (recovery point objective) enforcement\n> feature for Postgres where the WAL writes on the primary are stalled when\n> the WAL distance between the primary and standby exceeds the configured\n> (replica_lag_in_bytes) threshold. This feature is useful particularly in\n> the disaster recovery setups where primary and standby are in different\n> regions and synchronous replication can't be set up for latency and\n> performance reasons yet requires some level of RPO enforcement.\n>\n> Limiting transaction rate when the standby fails behind is a good feature\n> ...\n>\n> >\n> > The idea here is to calculate the lag between the primary and the\n> standby (Async?) server during XLogInsert and block the caller until the\n> lag is less than the threshold value. We can calculate the max lag by\n> iterating over ReplicationSlotCtl->replication_slots. 
If this is not\n> something we don't want to do in the core, at least adding a hook for\n> XlogInsert is of great value.\n>\n> but doing it in XLogInsert does not seem to be a good idea.\n\n\nXLogInsert isn't the best place to throttle/govern in a simple and fair\nway, particularly the long-running transactions on the server?\n\n\n> It's a\n> common point for all kinds of logging including VACUUM. We could\n> accidently stall a critical VACUUM operation because of that.\n>\n\nAgreed, but again this is a policy decision that DBA can relax/enforce. I\nexpect RPO is in the range of a few 100MBs to GBs and on a healthy system\ntypically lag never comes close to this value. The Hook implementation can\ntake care of nitty-gritty details on the policy enforcement based on the\nneeds, for example, not throttling some backend processes like vacuum,\ncheckpointer; throttling based on the roles, for example not to throttle\nsuperuser connections; and throttling based on replay lag, write lag,\ncheckpoint taking longer, closer to disk full. Each of these can be easily\ntranslated into GUCs. Depending on the direction of the thread on the hook\nvs a feature in the Core, I can add more implementation details.\n\n\n\n> As Bharath described, it better be handled at the application level\n> monitoring.\n>\n\nBoth RPO based WAL throttling and application level monitoring can co-exist\nas each one has its own merits and challenges. 
Each application developer\nhas to implement their own throttling logic and often times it is hard to\nget it right.\n\n\n> --\n> Best Wishes,\n> Ashutosh Bapat\n>\", \"msg_date\": \"Thu, 23 Dec 2021 09:07:48 -0800\", \"msg_from\": \"SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>\", \"msg_from_op\": true, \"msg_subject\": \"Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes\" }, { \"msg_contents\": \"Please find the attached draft patch.\n\nOn Thu, Dec 23, 2021 at 2:47 AM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Thu, Dec 23, 2021 at 5:53 AM SATYANARAYANA NARLAPURAM\n> <satyanarlapuram@gmail.com> wrote:\n> >\n> > Hi Hackers,\n> >\n> > I am considering implementing RPO (recovery point objective) enforcement\n> feature for Postgres where the WAL writes on the primary are stalled when\n> the WAL distance between the primary and standby exceeds the configured\n> (replica_lag_in_bytes) threshold. This feature is useful particularly in\n> the disaster recovery setups where primary and standby are in different\n> regions and synchronous replication can't be set up for latency and\n> performance reasons yet requires some level of RPO enforcement.\n>\n> +1 for the idea in general. 
If we were to have the hook in XLogInsert which gets\n> called so frequently or XLogInsert is a hot-path, the hook really\n> should do as little work as possible, otherwise the write operations\n> latency may increase.\n>\n\nA Hook is a good start. If there is enough interest then an extension can\nbe added to the contrib module.\n\n\n> > A few other scenarios I can think of with the hook are:\n> >\n> > Enforcing RPO as described above\n> > Enforcing rate limit and slow throttling when sync standby is falling\n> behind (could be flush lag or replay lag)\n> > Transactional log rate governance - useful for cloud providers to\n> provide SKU sizes based on allowed WAL writes.\n> >\n> > Thoughts?\n>\n> The hook can help to achieve the above objectives but where to place\n> it and what parameters it should take as input (or what info it should\n> emit out of the server via the hook) are important too.\n>\n\nXLogInsert in my opinion is the best place to call it and the hook can be\nsomething like this \"void xlog_insert_hook(NULL)\" as all the throttling\nlogic required is the current flush position which can be obtained\nfrom GetFlushRecPtr and the ReplicationSlotCtl. Attached a draft patch.\n\n\n>\n> Having said all, the RPO feature can also be implemented outside of\n> the postgres, a simple implementation could be - get the primary\n> current wal lsn using pg_current_wal_lsn and all the replicas\n> restart_lsn using pg_replication_slot, if they differ by certain\n> amount, then issue ALTER SYSTEM SET READ ONLY command [1] on the\n> primary, this requires the connections to the server and proper access\n> rights. 
This feature can also be implemented as an extension (without\n> the hook) which doesn't require any connections to the server yet can\n> access the required info primary current_wal_lsn, restart_lsn of the\n> replication slots etc, but the RPO enforcement may not be immediate as\n> the server doesn't have any hooks in XLogInsert or some other area.\n>\n\nREAD ONLY is a decent choice but can fail the writes or not take\ninto effect until the end of the transaction?\n\n\n> [1] -\n> https://www.postgresql.org/message-id/CAAJ_b967uKBiW6gbHr5aPzweURYjEGv333FHVHxvJmMhanwHXA%40mail.gmail.com\n>\n> Regards,\n> Bharath Rupireddy.\n>", "msg_date": "Thu, 23 Dec 2021 13:57:12 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": true, "msg_subject": "Fwd: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "On Fri, Dec 24, 2021 at 3:27 AM SATYANARAYANA NARLAPURAM <\nsatyanarlapuram@gmail.com> wrote:\n\n>\n>>\n> XLogInsert in my opinion is the best place to call it and the hook can be\n> something like this \"void xlog_insert_hook(NULL)\" as all the throttling\n> logic required is the current flush position which can be obtained\n> from GetFlushRecPtr and the ReplicationSlotCtl. Attached a draft patch.\n>\n\nIMHO, it is not a good idea to call an external hook function inside a\ncritical section. Generally, we ensure that we do not call any code path\nwithin a critical section which can throw an error and if we start calling\nthe external hook then we lose that control. It should be blocked at the\noperation level itself e.g. 
ALTER TABLE READ ONLY, or by some other hook at\na little higher level.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Fri, Dec 24, 2021 at 3:27 AM SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com> wrote:XLogInsert in my opinion is the best place to call it and the hook can be something like this \"void xlog_insert_hook(NULL)\" as all the throttling logic required is the current flush position which can be obtained from GetFlushRecPtr and the ReplicationSlotCtl. Attached a draft patch.IMHO, it is not a good idea to call an external hook function inside a critical section.  Generally, we ensure that we do not call any code path within a critical section which can throw an error and if we start calling the external hook then we lose that control.  It should be blocked at the operation level itself e.g. ALTER TABLE READ ONLY, or by some other hook at a little higher level.-- Regards,Dilip KumarEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 24 Dec 2021 16:43:24 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "On Fri, Dec 24, 2021 at 4:43 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Dec 24, 2021 at 3:27 AM SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com> wrote:\n>>\n>> XLogInsert in my opinion is the best place to call it and the hook can be something like this \"void xlog_insert_hook(NULL)\" as all the throttling logic required is the current flush position which can be obtained from GetFlushRecPtr and the ReplicationSlotCtl. Attached a draft patch.\n>\n> IMHO, it is not a good idea to call an external hook function inside a critical section. Generally, we ensure that we do not call any code path within a critical section which can throw an error and if we start calling the external hook then we lose that control. 
It should be blocked at the operation level itself e.g. ALTER TABLE READ ONLY, or by some other hook at a little higher level.\n\nYeah, good point. It's not advisable to give the control to the\nexternal module in the critical section. For instance, memory\nallocation isn't allowed (see [1]) and the ereport(ERROR,....) would\ntransform to PANIC inside the critical section (see [2], [3]).\nMoreover the critical section is to be short-spanned i.e. executing\nthe as minimal code as possible. There's no guarantee that an external\nmodule would follow these.\n\nI suggest we do it at the level of transaction start i.e. when a txnid\nis getting allocated i.e. in AssignTransactionId(). If we do this,\nwhen the limit for the throttling is exceeded, the current txn (even\nif it is a long running txn) continues to do the WAL insertions, the\nnext txns would get blocked. But this is okay and can be conveyed to\nthe users via documentation if need be. We do block txnid assignments\nfor parallel workers in this function, so this is a good choice IMO.\n\nThoughts?\n\n[1]\n/*\n * You should not do memory allocations within a critical section, because\n * an out-of-memory error will be escalated to a PANIC. To enforce that\n * rule, the allocation functions Assert that.\n */\n#define AssertNotInCriticalSection(context) \\\n Assert(CritSectionCount == 0 || (context)->allowInCritSection)\n\n[2]\n /*\n * If we are inside a critical section, all errors become PANIC\n * errors. See miscadmin.h.\n */\n if (CritSectionCount > 0)\n elevel = PANIC;\n\n[3]\n * A related, but conceptually distinct, mechanism is the \"critical section\"\n * mechanism. A critical section not only holds off cancel/die interrupts,\n * but causes any ereport(ERROR) or ereport(FATAL) to become ereport(PANIC)\n * --- that is, a system-wide reset is forced. Needless to say, only really\n * *critical* code should be marked as a critical section! 
Currently, this\n * mechanism is only used for XLOG-related code.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 24 Dec 2021 17:31:37 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "On Fri, Dec 24, 2021 at 3:13 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Fri, Dec 24, 2021 at 3:27 AM SATYANARAYANA NARLAPURAM <\n> satyanarlapuram@gmail.com> wrote:\n>\n>>\n>>>\n>> XLogInsert in my opinion is the best place to call it and the hook can be\n>> something like this \"void xlog_insert_hook(NULL)\" as all the throttling\n>> logic required is the current flush position which can be obtained\n>> from GetFlushRecPtr and the ReplicationSlotCtl. Attached a draft patch.\n>>\n>\n> IMHO, it is not a good idea to call an external hook function inside a\n> critical section. Generally, we ensure that we do not call any code path\n> within a critical section which can throw an error and if we start calling\n> the external hook then we lose that control.\n>\n\nThank you for the comment. XLogInsertRecord is inside a critical section\nbut not XLogInsert. Am I missing something?\n\n\n> It should be blocked at the operation level itself e.g. ALTER TABLE READ\n> ONLY, or by some other hook at a little higher level.\n>\n\nThere is a lot of maintenance overhead with a custom implementation at\nindividual databases and tables level. 
This doesn't provide the necessary\ncontrol that I am looking for.\n\n\n\n\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\nOn Fri, Dec 24, 2021 at 3:13 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:On Fri, Dec 24, 2021 at 3:27 AM SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com> wrote:XLogInsert in my opinion is the best place to call it and the hook can be something like this \"void xlog_insert_hook(NULL)\" as all the throttling logic required is the current flush position which can be obtained from GetFlushRecPtr and the ReplicationSlotCtl. Attached a draft patch.IMHO, it is not a good idea to call an external hook function inside a critical section.  Generally, we ensure that we do not call any code path within a critical section which can throw an error and if we start calling the external hook then we lose that control.  Thank you for the comment. XLogInsertRecord is inside a critical section but not XLogInsert. Am I missing something? It should be blocked at the operation level itself e.g. ALTER TABLE READ ONLY, or by some other hook at a little higher level. There is a lot of maintenance overhead with a custom implementation at individual databases and tables level. This doesn't provide the necessary control that I am looking for. 
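One shape the hook's "add delays when required" logic could take, for the WAL rate governance scenario mentioned earlier in the thread, is a token bucket over bytes written. This is a purely illustrative sketch (class and parameter names are invented; a real hook would read the shared WAL write position rather than take byte counts from the caller):

```python
import time

class WalRateLimiter:
    """Token bucket over WAL bytes: delay_for() returns how long the
    caller should sleep so the admitted rate stays under max_bytes_per_sec."""

    def __init__(self, max_bytes_per_sec, now=time.monotonic):
        self.rate = float(max_bytes_per_sec)
        self.capacity = self.rate      # allow roughly one second of burst
        self.tokens = self.rate
        self.now = now                 # injectable clock, handy for testing
        self.last = now()

    def delay_for(self, nbytes):
        """Charge nbytes against the bucket; return the sleep in seconds."""
        t = self.now()
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        self.tokens -= nbytes
        if self.tokens >= 0:
            return 0.0
        return -self.tokens / self.rate
```

Keeping the decision as "how long to sleep" rather than "block until caught up" is what would let different SKUs simply configure different max_bytes_per_sec values.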
-- Regards,Dilip KumarEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 25 Dec 2021 14:22:42 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "On Sun, Dec 26, 2021 at 3:52 AM SATYANARAYANA NARLAPURAM <\nsatyanarlapuram@gmail.com> wrote:\n\n>\n>\n> On Fri, Dec 24, 2021 at 3:13 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n>> On Fri, Dec 24, 2021 at 3:27 AM SATYANARAYANA NARLAPURAM <\n>> satyanarlapuram@gmail.com> wrote:\n>>\n>>>\n>>>>\n>>> XLogInsert in my opinion is the best place to call it and the hook can\n>>> be something like this \"void xlog_insert_hook(NULL)\" as all the throttling\n>>> logic required is the current flush position which can be obtained\n>>> from GetFlushRecPtr and the ReplicationSlotCtl. Attached a draft patch.\n>>>\n>>\n>> IMHO, it is not a good idea to call an external hook function inside a\n>> critical section. Generally, we ensure that we do not call any code path\n>> within a critical section which can throw an error and if we start calling\n>> the external hook then we lose that control.\n>>\n>\n> Thank you for the comment. XLogInsertRecord is inside a critical section\n> but not XLogInsert. 
Am I missing something?\n>\n\nActually all the WAL insertions are done under a critical section (except\nfew exceptions), that means if you see all the references of XLogInsert(),\nit is always called under the critical section and that is my main worry\nabout hooking at XLogInsert level.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n", "msg_date": "Sun, 26 Dec 2021 07:31:37 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" },
{ "msg_contents": "On Sat, Dec 25, 2021 at 6:01 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Sun, Dec 26, 2021 at 3:52 AM SATYANARAYANA NARLAPURAM <\n> satyanarlapuram@gmail.com> wrote:\n>\n>>\n>>\n>> On Fri, Dec 24, 2021 at 3:13 AM Dilip Kumar <dilipbalaut@gmail.com>\n>> wrote:\n>>\n>>> On Fri, Dec 24, 2021 at 3:27 AM SATYANARAYANA NARLAPURAM <\n>>> satyanarlapuram@gmail.com> wrote:\n>>>\n>>>>\n>>>>>\n>>>> XLogInsert in my opinion is the best place to call it and the hook can\n>>>> be something like this \"void xlog_insert_hook(NULL)\" as all the throttling\n>>>> logic required is the current flush position which can be obtained\n>>>> from GetFlushRecPtr and the ReplicationSlotCtl. Attached a draft patch.\n>>>>\n>>>\n>>> IMHO, it is not a good idea to call an external hook function inside a\n>>> critical section. Generally, we ensure that we do not call any code path\n>>> within a critical section which can throw an error and if we start calling\n>>> the external hook then we lose that control.\n>>>\n>>\n>> Thank you for the comment. XLogInsertRecord is inside a critical section\n>> but not XLogInsert. Am I missing something?\n>>\n>\n> Actually all the WAL insertions are done under a critical section (except\n> few exceptions), that means if you see all the references of XLogInsert(),\n> it is always called under the critical section and that is my main worry\n> about hooking at XLogInsert level.\n>\n\nGot it, understood the concern. But can we document the limitations of the\nhook and let the hook take care of it? 
I don't expect an error to be thrown\nhere since we are not planning to allocate memory or make file system calls\nbut instead look at the shared memory state and add delays when required.\n\n\n\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\nOn Sat, Dec 25, 2021 at 6:01 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:On Sun, Dec 26, 2021 at 3:52 AM SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com> wrote:On Fri, Dec 24, 2021 at 3:13 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:On Fri, Dec 24, 2021 at 3:27 AM SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com> wrote:XLogInsert in my opinion is the best place to call it and the hook can be something like this \"void xlog_insert_hook(NULL)\" as all the throttling logic required is the current flush position which can be obtained from GetFlushRecPtr and the ReplicationSlotCtl. Attached a draft patch.IMHO, it is not a good idea to call an external hook function inside a critical section.  Generally, we ensure that we do not call any code path within a critical section which can throw an error and if we start calling the external hook then we lose that control.  Thank you for the comment. XLogInsertRecord is inside a critical section but not XLogInsert. Am I missing something?Actually all the WAL insertions are done under a critical section (except few exceptions), that means if you see all the references of XLogInsert(), it is always called under the critical section and that is my main worry about hooking at XLogInsert level.Got it, understood the concern. But can we document the limitations of the hook and let the hook take care of it? I don't expect an error to be thrown here since we are not planning to allocate memory or make file system calls but instead look at the shared memory state and add delays when required.  
-- Regards,Dilip KumarEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 25 Dec 2021 21:06:26 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "On Sun, Dec 26, 2021 at 1:06 PM SATYANARAYANA NARLAPURAM\n<satyanarlapuram@gmail.com> wrote:\n>\n> Got it, understood the concern. But can we document the limitations of the hook and let the hook take care of it? I don't expect an error to be thrown here since we are not planning to allocate memory or make file system calls but instead look at the shared memory state and add delays when required.\n\nIt wouldn't work. You can't make any assumption about how long it\nwould take for the replication lag to resolve, so you may have to wait\nfor a very long time. It means that at the very least the sleep has\nto be interruptible and therefore can raise an error. In general\nthere isn't much you can due in a critical section, so this approach\ndoesn't seem sensible to me.\n\n\n", "msg_date": "Sun, 26 Dec 2021 13:23:31 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "On Sun, Dec 26, 2021 at 10:36 AM SATYANARAYANA NARLAPURAM <\nsatyanarlapuram@gmail.com> wrote:\n\n>\n>> Actually all the WAL insertions are done under a critical section (except\n>> few exceptions), that means if you see all the references of XLogInsert(),\n>> it is always called under the critical section and that is my main worry\n>> about hooking at XLogInsert level.\n>>\n>\n> Got it, understood the concern. But can we document the limitations of the\n> hook and let the hook take care of it? 
I don't expect an error to be thrown\n> here since we are not planning to allocate memory or make file system calls\n> but instead look at the shared memory state and add delays when required.\n>\n>\nYet another problem is that if we are in XlogInsert() that means we are\nholding the buffer locks on all the pages we have modified, so if we add a\nhook at that level which can make it wait then we would also block any of\nthe read operations needed to read from those buffers. I haven't thought\nwhat could be better way to do this but this is certainly not good.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Sun, Dec 26, 2021 at 10:36 AM SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com> wrote:Actually all the WAL insertions are done under a critical section (except few exceptions), that means if you see all the references of XLogInsert(), it is always called under the critical section and that is my main worry about hooking at XLogInsert level.Got it, understood the concern. But can we document the limitations of the hook and let the hook take care of it? I don't expect an error to be thrown here since we are not planning to allocate memory or make file system calls but instead look at the shared memory state and add delays when required.Yet another problem is that if we are in XlogInsert() that means we are holding the buffer locks on all the pages we have modified, so if we add a hook at that level which can make it wait then we would also block any of the read operations needed to read from those buffers.  I haven't thought what could be better way to do this but this is certainly not good. 
-- Regards,Dilip KumarEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sun, 26 Dec 2021 10:55:06 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "On Sat, Dec 25, 2021 at 9:25 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Sun, Dec 26, 2021 at 10:36 AM SATYANARAYANA NARLAPURAM <\n> satyanarlapuram@gmail.com> wrote:\n>\n>>\n>>> Actually all the WAL insertions are done under a critical section\n>>> (except few exceptions), that means if you see all the references of\n>>> XLogInsert(), it is always called under the critical section and that is my\n>>> main worry about hooking at XLogInsert level.\n>>>\n>>\n>> Got it, understood the concern. But can we document the limitations of\n>> the hook and let the hook take care of it? I don't expect an error to be\n>> thrown here since we are not planning to allocate memory or make file\n>> system calls but instead look at the shared memory state and add delays\n>> when required.\n>>\n>>\n> Yet another problem is that if we are in XlogInsert() that means we are\n> holding the buffer locks on all the pages we have modified, so if we add a\n> hook at that level which can make it wait then we would also block any of\n> the read operations needed to read from those buffers. I haven't thought\n> what could be better way to do this but this is certainly not good.\n>\n\nYes, this is a problem. The other approach is adding a hook at\nXLogWrite/XLogFlush? All the other backends will be waiting behind the\nWALWriteLock. The process that is performing the write enters into a busy\nloop with small delays until the criteria are met. 
Inability to process the\ninterrupts inside the critical section is a challenge in both approaches.\nAny other thoughts?\n\n\n>\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\nOn Sat, Dec 25, 2021 at 9:25 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:On Sun, Dec 26, 2021 at 10:36 AM SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com> wrote:Actually all the WAL insertions are done under a critical section (except few exceptions), that means if you see all the references of XLogInsert(), it is always called under the critical section and that is my main worry about hooking at XLogInsert level.Got it, understood the concern. But can we document the limitations of the hook and let the hook take care of it? I don't expect an error to be thrown here since we are not planning to allocate memory or make file system calls but instead look at the shared memory state and add delays when required.Yet another problem is that if we are in XlogInsert() that means we are holding the buffer locks on all the pages we have modified, so if we add a hook at that level which can make it wait then we would also block any of the read operations needed to read from those buffers.  I haven't thought what could be better way to do this but this is certainly not good.Yes, this is a problem. The other approach is adding a hook at XLogWrite/XLogFlush? All the other backends will be waiting behind the WALWriteLock. The process that is performing the write enters into a busy loop with small delays until the criteria are met. Inability to process the interrupts inside the critical section is a challenge in both approaches. Any other thoughts?  
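The busy loop with small delays could be bounded so that a stalled standby can never block the writer indefinitely, which also leaves room to service interrupts between sleeps. A rough illustration (function and parameter names are invented; check_interrupts stands in for CHECK_FOR_INTERRUPTS(), and the quantum/timeout values are placeholders):

```python
import time

def wait_for_replicas(get_lag_bytes, max_lag_bytes,
                      sleep_quantum=0.01, max_wait=1.0,
                      sleep=time.sleep, check_interrupts=lambda: None):
    """Bounded busy-wait: sleep in small quanta until the worst replica
    lag drops below max_lag_bytes, giving up after max_wait seconds so
    the WAL writer is never blocked indefinitely. Returns True if the
    lag criterion was met, False on timeout."""
    waited = 0.0
    while get_lag_bytes() > max_lag_bytes:
        if waited >= max_wait:
            return False            # criteria not met; caller decides what to do
        check_interrupts()          # stand-in for CHECK_FOR_INTERRUPTS()
        sleep(sleep_quantum)
        waited += sleep_quantum
    return True
```

On a False return the caller could either proceed anyway or escalate, which keeps the policy decision out of the low-level wait.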
-- Regards,Dilip KumarEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 27 Dec 2021 16:40:28 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "Greetings,\n\n* SATYANARAYANA NARLAPURAM (satyanarlapuram@gmail.com) wrote:\n> On Sat, Dec 25, 2021 at 9:25 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > On Sun, Dec 26, 2021 at 10:36 AM SATYANARAYANA NARLAPURAM <\n> > satyanarlapuram@gmail.com> wrote:\n> >>> Actually all the WAL insertions are done under a critical section\n> >>> (except few exceptions), that means if you see all the references of\n> >>> XLogInsert(), it is always called under the critical section and that is my\n> >>> main worry about hooking at XLogInsert level.\n> >>>\n> >>\n> >> Got it, understood the concern. But can we document the limitations of\n> >> the hook and let the hook take care of it? I don't expect an error to be\n> >> thrown here since we are not planning to allocate memory or make file\n> >> system calls but instead look at the shared memory state and add delays\n> >> when required.\n> >>\n> >>\n> > Yet another problem is that if we are in XlogInsert() that means we are\n> > holding the buffer locks on all the pages we have modified, so if we add a\n> > hook at that level which can make it wait then we would also block any of\n> > the read operations needed to read from those buffers. I haven't thought\n> > what could be better way to do this but this is certainly not good.\n> >\n> \n> Yes, this is a problem. The other approach is adding a hook at\n> XLogWrite/XLogFlush? All the other backends will be waiting behind the\n> WALWriteLock. The process that is performing the write enters into a busy\n> loop with small delays until the criteria are met. 
Inability to process the\n> interrupts inside the critical section is a challenge in both approaches.\n> Any other thoughts?\n\nWhy not have this work the exact same way sync replicas do, except that\nit's based off of some byte/time lag for some set of async replicas?\nThat is, in RecordTransactionCommit(), perhaps right after the\nSyncRepWaitForLSN() call, or maybe even add this to that function? Sure\nseems like there's a lot of similarity.\n\nThanks,\n\nStephen", "msg_date": "Wed, 29 Dec 2021 08:46:40 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "Stephen, thank you!\n\nOn Wed, Dec 29, 2021 at 5:46 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * SATYANARAYANA NARLAPURAM (satyanarlapuram@gmail.com) wrote:\n> > On Sat, Dec 25, 2021 at 9:25 PM Dilip Kumar <dilipbalaut@gmail.com>\n> wrote:\n> > > On Sun, Dec 26, 2021 at 10:36 AM SATYANARAYANA NARLAPURAM <\n> > > satyanarlapuram@gmail.com> wrote:\n> > >>> Actually all the WAL insertions are done under a critical section\n> > >>> (except few exceptions), that means if you see all the references of\n> > >>> XLogInsert(), it is always called under the critical section and\n> that is my\n> > >>> main worry about hooking at XLogInsert level.\n> > >>>\n> > >>\n> > >> Got it, understood the concern. But can we document the limitations of\n> > >> the hook and let the hook take care of it? 
I don't expect an error to\n> be\n> > >> thrown here since we are not planning to allocate memory or make file\n> > >> system calls but instead look at the shared memory state and add\n> delays\n> > >> when required.\n> > >>\n> > >>\n> > > Yet another problem is that if we are in XlogInsert() that means we are\n> > > holding the buffer locks on all the pages we have modified, so if we\n> add a\n> > > hook at that level which can make it wait then we would also block any\n> of\n> > > the read operations needed to read from those buffers. I haven't\n> thought\n> > > what could be better way to do this but this is certainly not good.\n> > >\n> >\n> > Yes, this is a problem. The other approach is adding a hook at\n> > XLogWrite/XLogFlush? All the other backends will be waiting behind the\n> > WALWriteLock. The process that is performing the write enters into a busy\n> > loop with small delays until the criteria are met. Inability to process\n> the\n> > interrupts inside the critical section is a challenge in both approaches.\n> > Any other thoughts?\n>\n> Why not have this work the exact same way sync replicas do, except that\n> it's based off of some byte/time lag for some set of async replicas?\n> That is, in RecordTransactionCommit(), perhaps right after the\n> SyncRepWaitForLSN() call, or maybe even add this to that function? Sure\n> seems like there's a lot of similarity.\n>\n\nI was thinking of achieving log governance (throttling WAL MB/sec) and also\nproviding RPO guarantees. In this model, it is hard to throttle WAL\ngeneration of a long running transaction (for example copy/select into).\nHowever, this meets my RPO needs. Are you in support of adding a hook or\nthe actual change? IMHO, the hook allows more creative options. 
I can go\nahead and make a patch accordingly.\n\n\n\n\n> Thanks,\n>\n> Stephen\n>\n\nStephen, thank you!On Wed, Dec 29, 2021 at 5:46 AM Stephen Frost <sfrost@snowman.net> wrote:Greetings,\n\n* SATYANARAYANA NARLAPURAM (satyanarlapuram@gmail.com) wrote:\n> On Sat, Dec 25, 2021 at 9:25 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > On Sun, Dec 26, 2021 at 10:36 AM SATYANARAYANA NARLAPURAM <\n> > satyanarlapuram@gmail.com> wrote:\n> >>> Actually all the WAL insertions are done under a critical section\n> >>> (except few exceptions), that means if you see all the references of\n> >>> XLogInsert(), it is always called under the critical section and that is my\n> >>> main worry about hooking at XLogInsert level.\n> >>>\n> >>\n> >> Got it, understood the concern. But can we document the limitations of\n> >> the hook and let the hook take care of it? I don't expect an error to be\n> >> thrown here since we are not planning to allocate memory or make file\n> >> system calls but instead look at the shared memory state and add delays\n> >> when required.\n> >>\n> >>\n> > Yet another problem is that if we are in XlogInsert() that means we are\n> > holding the buffer locks on all the pages we have modified, so if we add a\n> > hook at that level which can make it wait then we would also block any of\n> > the read operations needed to read from those buffers.  I haven't thought\n> > what could be better way to do this but this is certainly not good.\n> >\n> \n> Yes, this is a problem. The other approach is adding a hook at\n> XLogWrite/XLogFlush? All the other backends will be waiting behind the\n> WALWriteLock. The process that is performing the write enters into a busy\n> loop with small delays until the criteria are met. 
Inability to process the\n> interrupts inside the critical section is a challenge in both approaches.\n> Any other thoughts?\n\nWhy not have this work the exact same way sync replicas do, except that\nit's based off of some byte/time lag for some set of async replicas?\nThat is, in RecordTransactionCommit(), perhaps right after the\nSyncRepWaitForLSN() call, or maybe even add this to that function?  Sure\nseems like there's a lot of similarity.I was thinking of achieving log governance (throttling WAL MB/sec) and also providing RPO guarantees. In this model, it is hard to throttle WAL generation of a long running transaction (for example copy/select into). However, this meets my RPO needs. Are you in support of adding a hook or the actual change? IMHO, the hook allows more creative options. I can go ahead and make a patch accordingly. \nThanks,\n\nStephen", "msg_date": "Wed, 29 Dec 2021 11:04:12 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "Greetings,\n\nOn Wed, Dec 29, 2021 at 14:04 SATYANARAYANA NARLAPURAM <\nsatyanarlapuram@gmail.com> wrote:\n\n> Stephen, thank you!\n>\n> On Wed, Dec 29, 2021 at 5:46 AM Stephen Frost <sfrost@snowman.net> wrote:\n>\n>> Greetings,\n>>\n>> * SATYANARAYANA NARLAPURAM (satyanarlapuram@gmail.com) wrote:\n>> > On Sat, Dec 25, 2021 at 9:25 PM Dilip Kumar <dilipbalaut@gmail.com>\n>> wrote:\n>> > > On Sun, Dec 26, 2021 at 10:36 AM SATYANARAYANA NARLAPURAM <\n>> > > satyanarlapuram@gmail.com> wrote:\n>> > >>> Actually all the WAL insertions are done under a critical section\n>> > >>> (except few exceptions), that means if you see all the references of\n>> > >>> XLogInsert(), it is always called under the critical section and\n>> that is my\n>> > >>> main worry about hooking at XLogInsert level.\n>> > >>>\n>> > >>\n>> > >> Got it, understood the 
concern. But can we document the limitations\n>> of\n>> > >> the hook and let the hook take care of it? I don't expect an error\n>> to be\n>> > >> thrown here since we are not planning to allocate memory or make file\n>> > >> system calls but instead look at the shared memory state and add\n>> delays\n>> > >> when required.\n>> > >>\n>> > >>\n>> > > Yet another problem is that if we are in XlogInsert() that means we\n>> are\n>> > > holding the buffer locks on all the pages we have modified, so if we\n>> add a\n>> > > hook at that level which can make it wait then we would also block\n>> any of\n>> > > the read operations needed to read from those buffers. I haven't\n>> thought\n>> > > what could be better way to do this but this is certainly not good.\n>> > >\n>> >\n>> > Yes, this is a problem. The other approach is adding a hook at\n>> > XLogWrite/XLogFlush? All the other backends will be waiting behind the\n>> > WALWriteLock. The process that is performing the write enters into a\n>> busy\n>> > loop with small delays until the criteria are met. Inability to process\n>> the\n>> > interrupts inside the critical section is a challenge in both\n>> approaches.\n>> > Any other thoughts?\n>>\n>> Why not have this work the exact same way sync replicas do, except that\n>> it's based off of some byte/time lag for some set of async replicas?\n>> That is, in RecordTransactionCommit(), perhaps right after the\n>> SyncRepWaitForLSN() call, or maybe even add this to that function? Sure\n>> seems like there's a lot of similarity.\n>>\n>\n> I was thinking of achieving log governance (throttling WAL MB/sec) and\n> also providing RPO guarantees. 
In this model, it is hard to throttle WAL\n> generation of a long running transaction (for example copy/select into).\n>\n\nLong running transactions have a lot of downsides and are best discouraged.\nI don’t know that we should be designing this for that case specifically,\nparticularly given the complications it would introduce as discussed on\nthis thread already.\n\nHowever, this meets my RPO needs. Are you in support of adding a hook or\n> the actual change? IMHO, the hook allows more creative options. I can go\n> ahead and make a patch accordingly.\n>\n\nI would think this would make more sense as part of core rather than a\nhook, as that then requires an extension and additional setup to get going,\nwhich raises the bar quite a bit when it comes to actually being used.\n\nThanks,\n\nStephen\n\n>\n\n", "msg_date": "Wed, 29 Dec 2021 14:16:07 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "On Wed, Dec 29, 2021 at 11:16 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> On Wed, Dec 29, 2021 at 14:04 SATYANARAYANA NARLAPURAM <\n> satyanarlapuram@gmail.com> wrote:\n>\n>> Stephen, thank you!\n>>\n>> On Wed, Dec 29, 2021 at 5:46 AM Stephen Frost <sfrost@snowman.net> wrote:\n>>\n>>> Greetings,\n>>>\n>>> * SATYANARAYANA NARLAPURAM (satyanarlapuram@gmail.com) wrote:\n>>> > On Sat, Dec 25, 2021 at 9:25 PM Dilip Kumar <dilipbalaut@gmail.com>\n>>> wrote:\n>>> > > On Sun, Dec 26, 2021 at 10:36 AM SATYANARAYANA NARLAPURAM <\n>>> > > satyanarlapuram@gmail.com> wrote:\n>>> > >>> Actually all the WAL insertions are done under a critical section\n>>> > >>> (except few exceptions), that means if you see all the references\n>>> of\n>>> > >>> XLogInsert(), it is always called under the critical section and\n>>> that is my\n>>> > >>> main worry about hooking at XLogInsert level.\n>>> > >>>\n>>> > >>\n>>> > >> Got it, understood the concern. But can we document the limitations\n>>> of\n>>> > >> the hook and let the hook take care of it? 
I don't expect an error\n>>> to be\n>>> > >> thrown here since we are not planning to allocate memory or make\n>>> file\n>>> > >> system calls but instead look at the shared memory state and add\n>>> delays\n>>> > >> when required.\n>>> > >>\n>>> > >>\n>>> > > Yet another problem is that if we are in XlogInsert() that means we\n>>> are\n>>> > > holding the buffer locks on all the pages we have modified, so if we\n>>> add a\n>>> > > hook at that level which can make it wait then we would also block\n>>> any of\n>>> > > the read operations needed to read from those buffers. I haven't\n>>> thought\n>>> > > what could be better way to do this but this is certainly not good.\n>>> > >\n>>> >\n>>> > Yes, this is a problem. The other approach is adding a hook at\n>>> > XLogWrite/XLogFlush? All the other backends will be waiting behind the\n>>> > WALWriteLock. The process that is performing the write enters into a\n>>> busy\n>>> > loop with small delays until the criteria are met. Inability to\n>>> process the\n>>> > interrupts inside the critical section is a challenge in both\n>>> approaches.\n>>> > Any other thoughts?\n>>>\n>>> Why not have this work the exact same way sync replicas do, except that\n>>> it's based off of some byte/time lag for some set of async replicas?\n>>> That is, in RecordTransactionCommit(), perhaps right after the\n>>> SyncRepWaitForLSN() call, or maybe even add this to that function? Sure\n>>> seems like there's a lot of similarity.\n>>>\n>>\n>> I was thinking of achieving log governance (throttling WAL MB/sec) and\n>> also providing RPO guarantees. In this model, it is hard to throttle WAL\n>> generation of a long running transaction (for example copy/select into).\n>>\n>\n> Long running transactions have a lot of downsides and are best\n> discouraged. 
I don’t know that we should be designing this for that case\n> specifically, particularly given the complications it would introduce as\n> discussed on this thread already.\n>\n> However, this meets my RPO needs. Are you in support of adding a hook or\n>> the actual change? IMHO, the hook allows more creative options. I can go\n>> ahead and make a patch accordingly.\n>>\n>\n> I would think this would make more sense as part of core rather than a\n> hook, as that then requires an extension and additional setup to get going,\n> which raises the bar quite a bit when it comes to actually being used.\n>\n\nSounds good, I will work on making the changes accordingly.\n\n>\n> Thanks,\n>\n> Stephen\n>\n>>\n\n", "msg_date": "Wed, 29 Dec 2021 11:22:21 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "Hi,\n\nOn 2021-12-27 16:40:28 -0800, SATYANARAYANA NARLAPURAM wrote:\n> > Yet another problem is that if we are in XlogInsert() that means we are\n> > holding the buffer locks on all the pages we have modified, so if we add a\n> > hook at that level which can make it wait then we would also block any of\n> > the read operations needed to read from those buffers. I haven't thought\n> > what could be better way to do this but this is certainly not good.\n> >\n> \n> Yes, this is a problem. The other approach is adding a hook at\n> XLogWrite/XLogFlush?\n\nThat's pretty much the same - XLogInsert() can trigger an\nXLogWrite()/Flush().\n\nI think it's a complete no-go to add throttling to these places. It's quite\npossible that it'd cause new deadlocks, and it's almost guaranteed to have\nunintended consequences (e.g. replication falling back further because\nXLogFlush() is being throttled).\n\nI also don't think it's a sane thing to add hooks to these places. 
It's\ncomplicated enough as-is, adding the chance for random other things to happen\nduring such crucial operations will make it even harder to maintain.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 29 Dec 2021 11:31:51 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "On Wed, Dec 29, 2021 at 11:31 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2021-12-27 16:40:28 -0800, SATYANARAYANA NARLAPURAM wrote:\n> > > Yet another problem is that if we are in XlogInsert() that means we are\n> > > holding the buffer locks on all the pages we have modified, so if we\n> add a\n> > > hook at that level which can make it wait then we would also block any\n> of\n> > > the read operations needed to read from those buffers. I haven't\n> thought\n> > > what could be better way to do this but this is certainly not good.\n> > >\n> >\n> > Yes, this is a problem. The other approach is adding a hook at\n> > XLogWrite/XLogFlush?\n>\n> That's pretty much the same - XLogInsert() can trigger an\n> XLogWrite()/Flush().\n>\n> I think it's a complete no-go to add throttling to these places. It's quite\n> possible that it'd cause new deadlocks, and it's almost guaranteed to have\n> unintended consequences (e.g. replication falling back further because\n> XLogFlush() is being throttled).\n>\n> I also don't think it's a sane thing to add hooks to these places. It's\n> complicated enough as-is, adding the chance for random other things to\n> happen\n> during such crucial operations will make it even harder to maintain.\n>\n\nAndres, thanks for the comments. Agreed on this based on the previous\ndiscussions on this thread. 
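[Editorial note: the idea of waiting after SyncRepWaitForLSN() amounts to a bounded wait loop on replica progress. The following is a hedged sketch only, not the PostgreSQL implementation; the progress source is passed in as a callback purely so the sketch is self-contained and testable, and the sleep and interrupt handling a real version would need are elided.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;

/*
 * Sketch in the spirit of SyncRepWaitForLSN(): poll the replicas' apply
 * position until it reaches the commit record's LSN or the poll budget
 * runs out.  A real implementation would read shared memory, sleep
 * between polls, and honor interrupts; none of that is modeled here.
 */
bool
wait_for_replica_catchup(XLogRecPtr target_lsn,
                         XLogRecPtr (*get_apply_lsn)(void *arg), void *arg,
                         int max_polls)
{
    for (int i = 0; i < max_polls; i++)
    {
        if (get_apply_lsn(arg) >= target_lsn)
            return true;        /* replicas caught up; stop throttling */
        /* a sleep plus an interrupt check would go here */
    }
    return false;               /* budget exhausted; caller decides policy */
}

/* Fake progress source for the demo: advances one byte per poll. */
XLogRecPtr
fake_apply_lsn(void *arg)
{
    XLogRecPtr *pos = (XLogRecPtr *) arg;
    return (*pos)++;
}

/* Demo driver: run the wait loop against the fake progress source. */
bool
demo_wait(XLogRecPtr target_lsn, int max_polls)
{
    XLogRecPtr pos = 0;
    return wait_for_replica_catchup(target_lsn, fake_apply_lsn, &pos, max_polls);
}
```

The "budget exhausted" branch is where the policy questions raised in this thread live: give up, keep waiting, or start refusing new work.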
Could you please share your thoughts on adding\nit after SyncRepWaitForLSN()?\n\n\n>\n> Greetings,\n>\n> Andres Freund\n>\n\n", "msg_date": "Wed, 29 Dec 2021 11:34:53 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "Hi,\n\nOn 2021-12-29 11:34:53 -0800, SATYANARAYANA NARLAPURAM wrote:\n> On Wed, Dec 29, 2021 at 11:31 AM Andres Freund <andres@anarazel.de> wrote:\n> Andres, thanks for the comments. 
Agreed on this based on the previous\n> discussions on this thread. Could you please share your thoughts on adding\n> it after SyncRepWaitForLSN()?\n\nI don't think that's good either - you're delaying transaction commit\n(i.e. xact becoming visible / locks being released). That also has the danger\nof increasing lock contention (albeit more likely to be heavyweight locks /\nserializable state). It'd have to be after the transaction actually committed.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 29 Dec 2021 11:39:31 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "On Thu, Dec 30, 2021 at 1:09 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2021-12-29 11:34:53 -0800, SATYANARAYANA NARLAPURAM wrote:\n> > On Wed, Dec 29, 2021 at 11:31 AM Andres Freund <andres@anarazel.de>\n> wrote:\n> > Andres, thanks for the comments. Agreed on this based on the previous\n> > discussions on this thread. Could you please share your thoughts on\n> adding\n> > it after SyncRepWaitForLSN()?\n>\n> I don't think that's good either - you're delaying transaction commit\n> (i.e. xact becoming visible / locks being released).\n\n\nAgree with that.\n\n\n> That also has the danger\n> of increasing lock contention (albeit more likely to be heavyweight locks /\n> serializable state). It'd have to be after the transaction actually\n> committed.\n>\n\nYeah, I think that would make sense, even though we will be allowing a new\nbackend to get connected insert WAL, and get committed but after that, it\nwill be throttled. However, if the number of max connections will be very\nhigh then even after we detected a lag there a significant amount WAL could\nbe generated, even if we keep long-running transactions aside. 
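[Editorial note: the concern just raised, that many already-connected backends can keep generating WAL after a lag is detected, can be bounded by an admission check evaluated when a session is about to start writing. A hedged sketch follows; every name in it is invented for illustration and none of this is a PostgreSQL API.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;

/*
 * Hypothetical admission check for write transactions: once the distance
 * between the primary's insert position and the slowest replica exceeds
 * the configured RPO budget, refuse to start new write transactions.
 * Read-only transactions would never reach this check, so they stay
 * unaffected; transactions already writing are not stopped by it.
 */
bool
write_txn_allowed(XLogRecPtr insert_lsn, XLogRecPtr min_replica_lsn,
                  uint64_t rpo_limit_bytes)
{
    uint64_t lag = (insert_lsn > min_replica_lsn)
        ? insert_lsn - min_replica_lsn : 0;

    return lag <= rpo_limit_bytes;
}
```

This only caps WAL from transactions that have not yet started; as the discussion here notes, sessions mid-transaction keep writing until they finish, so it complements rather than replaces a commit-time wait.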
But I think\nstill it will serve the purpose of what Satya is trying to achieve.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n", "msg_date": "Thu, 30 Dec 2021 12:08:12 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "On Wed, Dec 29, 2021 at 10:38 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Thu, Dec 30, 2021 at 1:09 AM Andres Freund <andres@anarazel.de> wrote:\n>\n>> Hi,\n>>\n>> On 2021-12-29 11:34:53 -0800, SATYANARAYANA NARLAPURAM wrote:\n>> > On Wed, Dec 29, 2021 at 11:31 AM Andres Freund <andres@anarazel.de>\n>> wrote:\n>> > Andres, thanks for the comments. Agreed on this based on the previous\n>> > discussions on this thread. Could you please share your thoughts on\n>> adding\n>> > it after SyncRepWaitForLSN()?\n>>\n>> I don't think that's good either - you're delaying transaction commit\n>> (i.e. xact becoming visible / locks being released).\n>\n>\n> Agree with that.\n>\n>\n>> That also has the danger\n>> of increasing lock contention (albeit more likely to be heavyweight locks\n>> /\n>> serializable state). It'd have to be after the transaction actually\n>> committed.\n>>\n>\n> Yeah, I think that would make sense, even though we will be allowing a new\n> backend to get connected insert WAL, and get committed but after that, it\n> will be throttled. However, if the number of max connections will be very\n> high then even after we detected a lag there a significant amount WAL could\n> be generated, even if we keep long-running transactions aside. But I\n> think still it will serve the purpose of what Satya is trying to achieve.\n>\n\nI am afraid there are problems with making the RPO check post releasing the\nlocks. 
By this time the transaction is committed and visible to the other\nbackends (ProcArrayEndTransaction is already called) though the intention\nis to block committing transactions that violate the defined RPO. Even\nthough we block existing connections starting a new transaction, it is\npossible to do writes by opening a new connection / canceling the query. I\nam not too much worried about the lock contention as the system is already\nhosed because of the policy. This behavior is very similar to what\nhappens when the Sync standby is not responding. Thoughts?\n\n\n\n\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\n", "msg_date": "Wed, 29 Dec 2021 23:06:31 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "On Thu, Dec 30, 2021 at 12:36 PM SATYANARAYANA NARLAPURAM <\nsatyanarlapuram@gmail.com> wrote:\n\n>\n>>\n>> Yeah, I think that would make sense, even though we will be allowing a\n>> new backend to get connected insert WAL, and get committed but after that,\n>> it will be throttled. However, if the number of max connections will be\n>> very high then even after we detected a lag there a significant amount WAL\n>> could be generated, even if we keep long-running transactions aside. But I\n>> think still it will serve the purpose of what Satya is trying to achieve.\n>>\n>\n> I am afraid there are problems with making the RPO check post releasing\n> the locks. By this time the transaction is committed and visible to the\n> other backends (ProcArrayEndTransaction is already called) though the\n> intention is to block committing transactions that violate the defined RPO.\n> Even though we block existing connections starting a new transaction, it is\n> possible to do writes by opening a new connection / canceling the query. I\n> am not too much worried about the lock contention as the system is already\n> hosed because of the policy. This behavior is very similar to what\n> happens when the Sync standby is not responding. Thoughts?\n>\n\nYeah, that's true, but even if we are blocking the transactions from\ncommitting then also it is possible that a new connection can come and\ngenerate more WAL, yeah but I agree with the other part that if you\nthrottle after committing then the user can cancel the queries and generate\nmore WAL from those sessions as well. But that is an extreme case where\napplication developers want to bypass the throttling and want to generate\nmore WALs.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n
I\n> am not too much worried about the lock contention as the system is already\n> hosed because of the policy. This behavior is very similar to what\n> happens when the Sync standby is not responding. Thoughts?\n>\n\nYeah, that's true, but even if we are blocking the transactions from\ncommitting then also it is possible that a new connection can come and\ngenerate more WAL, yeah but I agree with the other part that if you\nthrottle after committing then the user can cancel the queries and generate\nmore WAL from those sessions as well. But that is an extreme case where\napplication developers want to bypass the throttling and want to generate\nmore WALs.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Thu, Dec 30, 2021 at 12:36 PM SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com> wrote:Yeah, I think that would make sense, even though we will be allowing a new backend to get connected insert WAL, and get committed but after that, it will be throttled.  However, if the number of max connections will be very high then even after we detected a lag there a significant amount WAL could be generated, even if we keep long-running transactions aside.  But I think still it will serve the purpose of what Satya is trying to achieve.I am afraid there are problems with making the RPO check post releasing the locks. By this time the transaction is committed and visible to the other backends (ProcArrayEndTransaction is already called) though the intention is to block committing transactions that violate the defined RPO. Even though we block existing connections starting a new transaction, it is possible to do writes by opening a new connection / canceling the query. I am not too much worried about the lock contention as the system is already hosed because of the policy. This behavior is very similar to what happens when the Sync standby is not responding. 
Thoughts?Yeah, that's true, but even if we are blocking the transactions from committing then also it is possible that a new connection can come and generate more WAL,  yeah but I agree with the other part that if you throttle after committing then the user can cancel the queries and generate more WAL from those sessions as well.  But that is an extreme case where application developers want to bypass the throttling and want to generate more WALs.   -- Regards,Dilip KumarEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 30 Dec 2021 13:20:49 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "On Thu, Dec 30, 2021 at 1:21 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Dec 30, 2021 at 12:36 PM SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com> wrote:\n>>>\n>>>\n>>> Yeah, I think that would make sense, even though we will be allowing a new backend to get connected insert WAL, and get committed but after that, it will be throttled. However, if the number of max connections will be very high then even after we detected a lag there a significant amount WAL could be generated, even if we keep long-running transactions aside. But I think still it will serve the purpose of what Satya is trying to achieve.\n>>\n>>\n>> I am afraid there are problems with making the RPO check post releasing the locks. By this time the transaction is committed and visible to the other backends (ProcArrayEndTransaction is already called) though the intention is to block committing transactions that violate the defined RPO. Even though we block existing connections starting a new transaction, it is possible to do writes by opening a new connection / canceling the query. I am not too much worried about the lock contention as the system is already hosed because of the policy. 
This behavior is very similar to what happens when the Sync standby is not responding. Thoughts?\n>\n>\n> Yeah, that's true, but even if we are blocking the transactions from committing then also it is possible that a new connection can come and generate more WAL, yeah but I agree with the other part that if you throttle after committing then the user can cancel the queries and generate more WAL from those sessions as well. But that is an extreme case where application developers want to bypass the throttling and want to generate more WALs.\n\nHow about having the new hook at the start of the new txn? If we do\nthis, when the limit for the throttling is exceeded, the current txn\n(even if it is a long running one) continues to do the WAL insertions,\nthe next txns would get blocked. Thoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 30 Dec 2021 13:41:04 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "On Thu, Dec 30, 2021 at 1:41 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n>\n> >\n> > Yeah, that's true, but even if we are blocking the transactions from\n> committing then also it is possible that a new connection can come and\n> generate more WAL, yeah but I agree with the other part that if you\n> throttle after committing then the user can cancel the queries and generate\n> more WAL from those sessions as well. But that is an extreme case where\n> application developers want to bypass the throttling and want to generate\n> more WALs.\n>\n> How about having the new hook at the start of the new txn? If we do\n> this, when the limit for the throttling is exceeded, the current txn\n> (even if it is a long running one) continues to do the WAL insertions,\n> the next txns would get blocked. 
Thoughts?\n>\n\nDo you mean while StartTransactionCommand or while assigning a new\ntransaction id? If it is at StartTransactionCommand then we would be\nblocking the sessions which are only performing read queries right? If we\nare doing at the transaction assignment level then we might be holding some\nof the locks so this might not be any better than throttling inside the\ncommit.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Thu, Dec 30, 2021 at 1:41 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Yeah, that's true, but even if we are blocking the transactions from committing then also it is possible that a new connection can come and generate more WAL,  yeah but I agree with the other part that if you throttle after committing then the user can cancel the queries and generate more WAL from those sessions as well.  But that is an extreme case where application developers want to bypass the throttling and want to generate more WALs.\n\nHow about having the new hook at the start of the new txn?  If we do\nthis, when the limit for the throttling is exceeded, the current txn\n(even if it is a long running one) continues to do the WAL insertions,\nthe next txns would get blocked. Thoughts?Do you mean while StartTransactionCommand or while assigning a new transaction id?  If it is at StartTransactionCommand then we would be blocking the sessions which are only performing read queries right?  
If we are doing at the transaction assignment level then we might be holding some of the locks so this might not be any better than throttling inside the commit.-- Regards,Dilip KumarEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 30 Dec 2021 13:49:47 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "On Thu, Dec 30, 2021 at 12:20 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Thu, Dec 30, 2021 at 1:41 PM Bharath Rupireddy <\n> bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n>>\n>> >\n>> > Yeah, that's true, but even if we are blocking the transactions from\n>> committing then also it is possible that a new connection can come and\n>> generate more WAL, yeah but I agree with the other part that if you\n>> throttle after committing then the user can cancel the queries and generate\n>> more WAL from those sessions as well. But that is an extreme case where\n>> application developers want to bypass the throttling and want to generate\n>> more WALs.\n>>\n>> How about having the new hook at the start of the new txn? If we do\n>> this, when the limit for the throttling is exceeded, the current txn\n>> (even if it is a long running one) continues to do the WAL insertions,\n>> the next txns would get blocked. Thoughts?\n>>\n>\n> Do you mean while StartTransactionCommand or while assigning a new\n> transaction id? If it is at StartTransactionCommand then we would be\n> blocking the sessions which are only performing read queries right?\n>\n\nDefinitely not at StartTransactionCommand but possibly while assigning\ntransaction Id inAssignTransactionId. 
Blocking readers is never the intent.\n\n\n> If we are doing at the transaction assignment level then we might be\n> holding some of the locks so this might not be any better than throttling\n> inside the commit.\n>\n\nIf we define RPO as no transaction can commit when the wal_distance is more\nthan configured MB, we had to throttle the writes before committing the\ntransaction and new WAL generation by new connections or active doesn't\nmatter as the transactions can't be committed and visible to the user. If\nthe RPO is defined as no new write transactions allowed when wal_distance >\nconfigured MB, then we can block assigning the new transaction IDs until\nthe RPO policy is met. IMHO, following the sync replication semantics is\neasier and more explainable as it is already familiar to the customers.\n\n\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>\n", "msg_date": "Thu, 30 Dec 2021 00:44:46 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" },
{ "msg_contents": "Hi,\n\nOn 2021-12-29 23:06:31 -0800, SATYANARAYANA NARLAPURAM wrote:\n> I am afraid there are problems with making the RPO check post releasing the\n> locks. By this time the transaction is committed and visible to the other\n> backends (ProcArrayEndTransaction is already called) though the intention\n> is to block committing transactions that violate the defined RPO.\n\nShrug. Anything transaction based has way bigger holes than this.\n\n\n> Even though we block existing connections starting a new transaction, it is\n> possible to do writes by opening a new connection / canceling the query.\n\nIf your threat model is users explicitly trying to circumvent this they can\ncause problems much more easily. 
Trigger a bunch of vacuums, big COPYs etc.\n\n\n> I am not too much worried about the lock contention as the system is already\n> hosed because of the policy. This behavior is very similar to what happens\n> when the Sync standby is not responding. Thoughts?\n\nI don't see why we'd bury ourselves deeper in problems just because we already\nhave a problem. There's reasons why we want to do the delay for syncrep be\nbefore xact completion - but I don't see those applying to WAL throttling to a\nsignificant degree, particularly not when it's on a transaction level.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 30 Dec 2021 11:26:32 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "On Wed, Dec 22, 2021 at 4:23 PM SATYANARAYANA NARLAPURAM <\nsatyanarlapuram@gmail.com> wrote:\n\n> Hi Hackers,\n>\n> I am considering implementing RPO (recovery point objective) enforcement\n> feature for Postgres where the WAL writes on the primary are stalled when\n> the WAL distance between the primary and standby exceeds the configured\n> (replica_lag_in_bytes) threshold. This feature is useful particularly in\n> the disaster recovery setups where primary and standby are in different\n> regions and synchronous replication can't be set up for latency and\n> performance reasons yet requires some level of RPO enforcement.\n>\n> The idea here is to calculate the lag between the primary and the standby\n> (Async?) server during XLogInsert and block the caller until the lag is\n> less than the threshold value. We can calculate the max lag by iterating\n> over ReplicationSlotCtl->replication_slots. If this is not something we\n> don't want to do in the core, at least adding a hook for XlogInsert is of\n> great value.\n>\n> A few other scenarios I can think of with the hook are:\n>\n> 1. 
Enforcing RPO as described above\n> 2. Enforcing rate limit and slow throttling when sync standby is\n> falling behind (could be flush lag or replay lag)\n> 3. Transactional log rate governance - useful for cloud providers to\n> provide SKU sizes based on allowed WAL writes.\n>\n> Thoughts?\n>\n\nVery similar requirement or need was discussed in the past in [1], not\nexactly RPO enforcement but large bulk operation/transaction negatively\nimpacting concurrent transactions due to replication lag.\nWould be good to refer to that thread as it explains the challenges for\nimplementing functionality mentioned in this thread. Mostly the challenge\nbeing no common place to code the throttling logic instead requiring calls\nto be sprinkled around in various parts.\n\n1]\nhttps://www.postgresql.org/message-id/flat/CA%2BU5nMLfxBgHQ1VLSeBHYEMjHXz_OHSkuFdU6_1quiGM0RNKEg%40mail.gmail.com", "msg_date": "Mon, 3 Jan 2022 10:55:06 -0800", "msg_from": "Ashwin Agrawal <ashwinstar@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" },
{ "msg_contents": "Hi,\n\nOn 2021-12-29 11:31:51 -0800, Andres Freund wrote:\n> That's pretty much the same - XLogInsert() can trigger an\n> XLogWrite()/Flush().\n> \n> I think it's a complete no-go to add throttling to these places. It's quite\n> possible that it'd cause new deadlocks, and it's almost guaranteed to have\n> unintended consequences (e.g. replication falling back further because\n> XLogFlush() is being throttled).\n\nI thought of another way to implement this feature. What if we checked the\ncurrent distance somewhere within XLogInsert(), but only set\nInterruptPending=true, XLogDelayPending=true. 
Then in ProcessInterrupts() we\ncheck if XLogDelayPending is true and sleep the appropriate time.\n\nThat way the sleep doesn't happen with important locks held / within a\ncritical section, but we still delay close to where we went over the maximum\nlag. And the overhead should be fairly minimal.\n\n\nI'm doubtful that implementing the waits on a transactional level provides a\nmeaningful enough amount of control - there's just too much WAL that can be\ngenerated within a transaction.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 Jan 2022 09:46:43 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "On Wed, Jan 5, 2022 at 11:16 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-12-29 11:31:51 -0800, Andres Freund wrote:\n> > That's pretty much the same - XLogInsert() can trigger an\n> > XLogWrite()/Flush().\n> >\n> > I think it's a complete no-go to add throttling to these places. It's quite\n> > possible that it'd cause new deadlocks, and it's almost guaranteed to have\n> > unintended consequences (e.g. replication falling back further because\n> > XLogFlush() is being throttled).\n>\n> I thought of another way to implement this feature. What if we checked the\n> current distance somewhere within XLogInsert(), but only set\n> InterruptPending=true, XLogDelayPending=true. Then in ProcessInterrupts() we\n> check if XLogDelayPending is true and sleep the appropriate time.\n>\n> That way the sleep doesn't happen with important locks held / within a\n> critical section, but we still delay close to where we went over the maximum\n> lag. 
And the overhead should be fairly minimal.\n\n+1, this sounds like a really good idea to me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Jan 2022 09:31:44 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "On Wed, Jan 5, 2022 at 9:46 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2021-12-29 11:31:51 -0800, Andres Freund wrote:\n> > That's pretty much the same - XLogInsert() can trigger an\n> > XLogWrite()/Flush().\n> >\n> > I think it's a complete no-go to add throttling to these places. It's\n> quite\n> > possible that it'd cause new deadlocks, and it's almost guaranteed to\n> have\n> > unintended consequences (e.g. replication falling back further because\n> > XLogFlush() is being throttled).\n>\n> I thought of another way to implement this feature. What if we checked the\n> current distance somewhere within XLogInsert(), but only set\n> InterruptPending=true, XLogDelayPending=true. Then in ProcessInterrupts()\n> we\n> check if XLogDelayPending is true and sleep the appropriate time.\n>\n> That way the sleep doesn't happen with important locks held / within a\n> critical section, but we still delay close to where we went over the\n> maximum\n> lag. And the overhead should be fairly minimal.\n>\n\n+1 to the idea, this way we can fairly throttle large and\nsmaller transactions the same way. I will work on this model and share the\npatch. 
Please note that the lock contention still exists in this case.\n\n\n> I'm doubtful that implementing the waits on a transactional level provides\n> a\n> meaningful enough amount of control - there's just too much WAL that can be\n> generated within a transaction.\n>\n\n\n>\n> Greetings,\n>\n> Andres Freund\n>\n", "msg_date": "Wed, 5 Jan 2022 21:56:56 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" },
{ "msg_contents": "On Thu, Jan 6, 2022 at 11:27 AM SATYANARAYANA NARLAPURAM\n<satyanarlapuram@gmail.com> wrote:\n\n> On Wed, Jan 5, 2022 at 9:46 AM Andres Freund <andres@anarazel.de> wrote:\n>>\n>> Hi,\n>>\n>> On 2021-12-29 11:31:51 -0800, Andres Freund wrote:\n>> > That's pretty much the same - XLogInsert() can trigger an\n>> > XLogWrite()/Flush().\n>> >\n>> > I think it's a complete no-go to add throttling to these places. It's quite\n>> > possible that it'd cause new deadlocks, and it's almost guaranteed to have\n>> > unintended consequences (e.g. replication falling back further because\n>> > XLogFlush() is being throttled).\n>>\n>> I thought of another way to implement this feature. What if we checked the\n>> current distance somewhere within XLogInsert(), but only set\n>> InterruptPending=true, XLogDelayPending=true. Then in ProcessInterrupts() we\n>> check if XLogDelayPending is true and sleep the appropriate time.\n>>\n>> That way the sleep doesn't happen with important locks held / within a\n>> critical section, but we still delay close to where we went over the maximum\n>> lag. And the overhead should be fairly minimal.\n>\n>\n> +1 to the idea, this way we can fairly throttle large and smaller transactions the same way. I will work on this model and share the patch. 
Please note that the lock contention still exists in this case.\n\nGenerally while checking for the interrupt we should not be holding\nany lwlock that means we don't have the risk of holding any buffer\nlocks, so any other reader can continue to read from those buffers.\nWe will only be holding some heavyweight locks like relation/tuple\nlock but that will not impact anyone except the writers trying to\nupdate the same tuple or the DDL who want to modify the table\ndefinition so I don't think we have any issue here because anyway we\ndon't want other writers to continue.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Jan 2022 11:35:12 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "On Wed, Jan 5, 2022 at 10:05 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Thu, Jan 6, 2022 at 11:27 AM SATYANARAYANA NARLAPURAM\n> <satyanarlapuram@gmail.com> wrote:\n>\n> > On Wed, Jan 5, 2022 at 9:46 AM Andres Freund <andres@anarazel.de> wrote:\n> >>\n> >> Hi,\n> >>\n> >> On 2021-12-29 11:31:51 -0800, Andres Freund wrote:\n> >> > That's pretty much the same - XLogInsert() can trigger an\n> >> > XLogWrite()/Flush().\n> >> >\n> >> > I think it's a complete no-go to add throttling to these places. It's\n> quite\n> >> > possible that it'd cause new deadlocks, and it's almost guaranteed to\n> have\n> >> > unintended consequences (e.g. replication falling back further because\n> >> > XLogFlush() is being throttled).\n> >>\n> >> I thought of another way to implement this feature. What if we checked\n> the\n> >> current distance somewhere within XLogInsert(), but only set\n> >> InterruptPending=true, XLogDelayPending=true. 
Then in\n> ProcessInterrupts() we\n>> check if XLogDelayPending is true and sleep the appropriate time.\n>>\n>> That way the sleep doesn't happen with important locks held / within a\n>> critical section, but we still delay close to where we went over the\n> maximum\n>> lag. And the overhead should be fairly minimal.\n>\n>\n> +1 to the idea, this way we can fairly throttle large and smaller\n> transactions the same way. I will work on this model and share the patch.\n> Please note that the lock contention still exists in this case.\n>\n> Generally while checking for the interrupt we should not be holding\n> any lwlock that means we don't have the risk of holding any buffer\n> locks, so any other reader can continue to read from those buffers.\n> We will only be holding some heavyweight locks like relation/tuple\n> lock but that will not impact anyone except the writers trying to\n> update the same tuple or the DDL who want to modify the table\n> definition so I don't think we have any issue here because anyway we\n> don't want other writers to continue.\n>\n\nYes, it should be ok. I was just making it explicit on Andres' previous\ncomment on holding the heavyweight locks when wait before the commit.\n\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>\n", "msg_date": "Wed, 5 Jan 2022 23:00:10 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" },
{ "msg_contents": "I noticed this thread and thought I'd share my experiences building\r\nsomething similar for Multi-AZ DB clusters [0]. 
It's not a strict RPO\r\nmechanism, but it does throttle backends in an effort to keep the\r\nreplay lag below a configured maximum. I can share the code if there\r\nis interest.\r\n\r\nI wrote it as a new extension, and except for one piece that I'll go\r\ninto later, I was able to avoid changes to core PostgreSQL code. The\r\nextension manages a background worker that periodically assesses the\r\nstate of the designated standbys and updates an atomic in shared\r\nmemory that indicates how long to delay. A transaction callback\r\nchecks this value and sleeps as necessary. Delay can be injected for\r\nwrite-enabled transactions on the primary, read-only transactions on\r\nthe standbys, or both. The extension is heavily configurable so that\r\nit can meet the needs of a variety of workloads.\r\n\r\nOne interesting challenge I encountered was accurately determining the\r\namount of replay lag. The problem was twofold. First, if there is no\r\nactivity on the primary, there will be nothing to replay on the\r\nstandbys, so the replay lag will appear to grow unbounded. To work\r\naround this, the extension's background worker periodically creates an\r\nempty COMMIT record. Second, if a standby reconnects after a long\r\ntime, the replay lag won't be accurate for some time. Instead, the\r\nreplay lag will slowly increase until it reaches the correct value.\r\nSince the delay calculation looks at the trend of the replay lag, this\r\napparent unbounded growth causes it to inject far more delay than is\r\nnecessary. My guess is that this is related to 9ea3c64, and maybe it\r\nis worth rethinking that logic. For now, the extension just\r\nperiodically reports the value of GetLatestXTime() from the standbys\r\nto the primary to get an accurate reading. This is done via a new\r\nreplication callback mechanism (which requires core PostgreSQL\r\nchanges). 
I can share this patch along with the extension, as I bet\r\nthere are other applications for it.\r\n\r\nI should also note that the extension only considers \"active\" standbys\r\nand primaries. That is, ones with an active WAL sender or WAL\r\nreceiver. This avoids the need to guess what should be done during a\r\nnetwork partition, but it also means that we must gracefully handle\r\nstandbys reconnecting with massive amounts of lag. The extension is\r\ndesigned to slowly ramp up the amount of injected delay until the\r\nstandby's apply lag is trending down at a sufficient rate.\r\n\r\nI see that an approach was suggested upthread for throttling based on\r\nWAL distance instead of per-transaction. While the transaction\r\napproach works decently well for certain workloads (e.g., many small\r\ntransactions like those from pgbench), it might require further tuning\r\nfor very large transactions or workloads with a variety of transaction\r\nsizes. For that reason, I would definitely support building a way to\r\nthrottle based on WAL generation. It might be a good idea to avoid\r\nthrottling critical activity such as anti-wraparound vacuuming, too.\r\n\r\nNathan\r\n\r\n[0] https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html\r\n\r\n", "msg_date": "Tue, 11 Jan 2022 00:06:21 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the\n configured replica_lag_in_bytes" }, { "msg_contents": "\n\nOn 11.01.2022 03:06, Bossart, Nathan wrote:\n> I noticed this thread and thought I'd share my experiences building\n> something similar for Multi-AZ DB clusters [0]. It's not a strict RPO\n> mechanism, but it does throttle backends in an effort to keep the\n> replay lag below a configured maximum. 
I can share the code if there\n> is interest.\n>\n> I wrote it as a new extension, and except for one piece that I'll go\n> into later, I was able to avoid changes to core PostgreSQL code. The\n> extension manages a background worker that periodically assesses the\n> state of the designated standbys and updates an atomic in shared\n> memory that indicates how long to delay. A transaction callback\n> checks this value and sleeps as necessary. Delay can be injected for\n> write-enabled transactions on the primary, read-only transactions on\n> the standbys, or both. The extension is heavily configurable so that\n> it can meet the needs of a variety of workloads.\n>\n> One interesting challenge I encountered was accurately determining the\n> amount of replay lag. The problem was twofold. First, if there is no\n> activity on the primary, there will be nothing to replay on the\n> standbys, so the replay lag will appear to grow unbounded. To work\n> around this, the extension's background worker periodically creates an\n> empty COMMIT record. Second, if a standby reconnects after a long\n> time, the replay lag won't be accurate for some time. Instead, the\n> replay lag will slowly increase until it reaches the correct value.\n> Since the delay calculation looks at the trend of the replay lag, this\n> apparent unbounded growth causes it to inject far more delay than is\n> necessary. My guess is that this is related to 9ea3c64, and maybe it\n> is worth rethinking that logic. For now, the extension just\n> periodically reports the value of GetLatestXTime() from the standbys\n> to the primary to get an accurate reading. This is done via a new\n> replication callback mechanism (which requires core PostgreSQL\n> changes). I can share this patch along with the extension, as I bet\n> there are other applications for it.\n>\n> I should also note that the extension only considers \"active\" standbys\n> and primaries. That is, ones with an active WAL sender or WAL\n> receiver. 
This avoids the need to guess what should be done during a\n> network partition, but it also means that we must gracefully handle\n> standbys reconnecting with massive amounts of lag. The extension is\n> designed to slowly ramp up the amount of injected delay until the\n> standby's apply lag is trending down at a sufficient rate.\n>\n> I see that an approach was suggested upthread for throttling based on\n> WAL distance instead of per-transaction. While the transaction\n> approach works decently well for certain workloads (e.g., many small\n> transactions like those from pgbench), it might require further tuning\n> for very large transactions or workloads with a variety of transaction\n> sizes. For that reason, I would definitely support building a way to\n> throttle based on WAL generation. It might be a good idea to avoid\n> throttling critical activity such as anti-wraparound vacuuming, too.\n>\n> Nathan\n>\n> [0] https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html\n>\n\nWe have faced with the similar problem in Zenith (open source Aurora) \nand have to implement back pressure mechanism to prevent overflow of WAL \nat stateless compute nodes\nand too long delays of page reconstruction. Our implementation is the \nfollowing:\n1. Three GUCs are added: max_replication_write/flush/apply_lag\n2. Replication lags are checked in XLogInsert and if one of 3 thresholds \nis reached then InterruptPending is set.\n3. 
In ProcessInterrupts we block backend execution until lag is within \nspecified boundary:\n\n     #define BACK_PRESSURE_DELAY 10000L // 0.01 sec\n     while(true)\n     {\n         ProcessInterrupts_pg();\n\n         // Suspend writers until replicas catch up\n         lag = backpressure_lag();\n         if (lag <= 0)\n             break;\n\n         set_ps_display(\"backpressure throttling\");\n\n         elog(DEBUG2, \"backpressure throttling: lag %lu\", lag);\n         pg_usleep(BACK_PRESSURE_DELAY);\n     }\n\nWhat is wrong here is that backend can be blocked for a long time \n(causing failure of client application due to timeout expiration) and \nhold acquired locks while sleeping.\nWe are thinking about smarter way of choosing throttling delay (for \nexample exponential increase of throttling sleep interval until some \nmaximal value is reached).\nBut it is really hard to find some universal schema which will be good \nfor all use cases (for example in case of short living session, which \nclients are connected to the server to execute just one query).\n\nConcerning throttling at the end of transaction which eliminates problem \nwith holding locks and do not require changes in postgres core, \nunfortunately it doesn't address problem with large transactions (for \nexample bulk load of data using COPY). In this case just one transaction \ncan cause arbitrary large lag.\n\nI am not sure how critical is the problems with holding locks during \nthrottling: yes, it may block other database activity, including vacuum \nand execution of read-only queries.\nBut it should not block walsender and so cause deadlock. And in most \ncases read-only transactions are not conflicting with write transaction, \nso suspending write transaction\nshould not block readers.\n\nAnother problem with throttling is large WAL records (for example custom \nlogical replication WAL record can be arbitrary large). 
If such record \nis larger than replication lag limit,\nthen it can cause deadlock.\n\n\n", "msg_date": "Tue, 11 Jan 2022 11:41:26 +0300", "msg_from": "Konstantin Knizhnik <knizhnik@garret.ru>", "msg_from_op": false, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" }, { "msg_contents": "On Tue, Jan 11, 2022 at 2:11 PM Konstantin Knizhnik <knizhnik@garret.ru> wrote:\n>\n> We have faced with the similar problem in Zenith (open source Aurora)\n> and have to implement back pressure mechanism to prevent overflow of WAL\n> at stateless compute nodes\n> and too long delays of [age reconstruction. Our implementation is the\n> following:\n> 1. Three GUCs are added: max_replication_write/flush/apply_lag\n> 2. Replication lags are checked in XLogInsert and if one of 3 thresholds\n> is reached then InterruptPending is set.\n> 3. In ProcessInterrupts we block backend execution until lag is within\n> specified boundary:\n>\n> #define BACK_PRESSURE_DELAY 10000L // 0.01 sec\n> while(true)\n> {\n> ProcessInterrupts_pg();\n>\n> // Suspend writers until replicas catch up\n> lag = backpressure_lag();\n> if (lag <= 0)\n> break;\n>\n> set_ps_display(\"backpressure throttling\");\n>\n> elog(DEBUG2, \"backpressure throttling: lag %lu\", lag);\n> pg_usleep(BACK_PRESSURE_DELAY);\n> }\n>\n> What is wrong here is that backend can be blocked for a long time\n> (causing failure of client application due to timeout expiration) and\n> hold acquired locks while sleeping.\n\nDo we ever call CHECK_FOR_INTERRUPTS() while holding \"important\"\nlocks? 
I haven't seen any asserts or anything of that sort in\nProcessInterrupts() though, looks like it's the caller's\nresponsibility to not process interrupts while holding heavy weight\nlocks, here are some points on this upthread [1].\n\nI don't think we have problem with various postgres timeouts\nstatement_timeout, lock_timeout, idle_in_transaction_session_timeout,\nidle_session_timeout, client_connection_check_interval, because while\nwe wait for replication lag to get better in ProcessInterrupts(). I\nthink SIGALRM can be raised while we wait for replication lag to get\nbetter, but it can't be handled. Why can't we just disable these\ntimeouts before going to wait and reset/enable right after the\nreplication lag gets better?\n\nAnd the clients can always have their own\nno-reply-kill-transaction-sort-of-timeout, if yes, let them fail and\ndeal with it. I don't think we can do much about this.\n\n> We are thinking about smarter way of choosing throttling delay (for\n> example exponential increase of throttling sleep interval until some\n> maximal value is reached).\n> But it is really hard to find some universal schema which will be good\n> for all use cases (for example in case of short living session, which\n> clients are connected to the server to execute just one query).\n\nI think there has to be an upper limit to wait, perhaps a\n'preconfigured amount of time'. I think others upthread aren't happy\nwith failing transactions because of the replication lag. But, my\npoint is how much time we would let the backends wait or throttle WAL\nwrites? 
It mustn't be forever (say if a broken connection to the async\nstandby is found).\n\n[1] https://www.postgresql.org/message-id/20220105174643.lozdd3radxv4tlmx%40alap3.anarazel.de\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 23 Apr 2022 12:29:25 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Throttling WAL inserts when the standby falls behind more than\n the configured replica_lag_in_bytes" } ]
[ { "msg_contents": "Hi,\n\nCurrently the end-of-recovery checkpoint can be much slower, impacting\nthe server availability, if there are many replication slot files\nXXXX.snap or map-XXXX to be enumerated and deleted. How about skipping\nthe .snap and map- file handling during the end-of-recovery\ncheckpoint? It makes the server available faster and the next regular\ncheckpoint can deal with these files. If required, we can have a GUC\n(skip_replication_slot_file_handling or some other better name) to\ncontrol this default being the existing behavior.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 23 Dec 2021 16:46:02 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "skip replication slot snapshot/map file removal during\n end-of-recovery checkpoint" }, { "msg_contents": "On Thu, Dec 23, 2021 at 4:46 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> Currently the end-of-recovery checkpoint can be much slower, impacting\n> the server availability, if there are many replication slot files\n> XXXX.snap or map-XXXX to be enumerated and deleted. How about skipping\n> the .snap and map- file handling during the end-of-recovery\n> checkpoint? It makes the server available faster and the next regular\n> checkpoint can deal with these files. 
If required, we can have a GUC\n> (skip_replication_slot_file_handling or some other better name) to\n> control this default being the existing behavior.\n>\n> Thoughts?\n\nHere's the v1 patch, please review it.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 31 Dec 2021 11:44:46 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: skip replication slot snapshot/map file removal during\n end-of-recovery checkpoint" }, { "msg_contents": "On 12/23/21, 3:17 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> Currently the end-of-recovery checkpoint can be much slower, impacting\r\n> the server availability, if there are many replication slot files\r\n> XXXX.snap or map-XXXX to be enumerated and deleted. How about skipping\r\n> the .snap and map- file handling during the end-of-recovery\r\n> checkpoint? It makes the server available faster and the next regular\r\n> checkpoint can deal with these files. If required, we can have a GUC\r\n> (skip_replication_slot_file_handling or some other better name) to\r\n> control this default being the existing behavior.\r\n\r\nI suggested something similar as a possibility in the other thread\r\nwhere these tasks are being discussed [0]. I think it is worth\r\nconsidering, but IMO it is not a complete solution to the problem. If\r\nthere are frequently many such files to delete and regular checkpoints\r\nare taking longer, the shutdown/end-of-recovery checkpoint could still\r\ntake a while. 
I think it would be better to separate these tasks from\r\ncheckpointing instead.\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/A285A823-0AF2-4376-838E-847FA4710F9A%40amazon.com\r\n\r\n", "msg_date": "Wed, 5 Jan 2022 23:34:01 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: skip replication slot snapshot/map file removal during\n end-of-recovery\n checkpoint" }, { "msg_contents": "On Thu, Jan 6, 2022 at 5:04 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 12/23/21, 3:17 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Currently the end-of-recovery checkpoint can be much slower, impacting\n> > the server availability, if there are many replication slot files\n> > XXXX.snap or map-XXXX to be enumerated and deleted. How about skipping\n> > the .snap and map- file handling during the end-of-recovery\n> > checkpoint? It makes the server available faster and the next regular\n> > checkpoint can deal with these files. If required, we can have a GUC\n> > (skip_replication_slot_file_handling or some other better name) to\n> > control this default being the existing behavior.\n>\n> I suggested something similar as a possibility in the other thread\n> where these tasks are being discussed [0]. I think it is worth\n> considering, but IMO it is not a complete solution to the problem. If\n> there are frequently many such files to delete and regular checkpoints\n> are taking longer, the shutdown/end-of-recovery checkpoint could still\n> take a while. I think it would be better to separate these tasks from\n> checkpointing instead.\n>\n> [0] https://postgr.es/m/A285A823-0AF2-4376-838E-847FA4710F9A%40amazon.com\n\nThanks. 
I agree to solve it as part of the other thread and close this\nthread here.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 8 Jan 2022 10:11:22 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: skip replication slot snapshot/map file removal during\n end-of-recovery checkpoint" } ]
[ { "msg_contents": "Hi,\n\npg_control_checkpoint emits 18 columns whereas the values and nulls\narrays are defined to be of size 19. Although it's not critical,\nattaching a tiny patch to fix this.\n\ndiff --git a/src/backend/utils/misc/pg_controldata.c\nb/src/backend/utils/misc/pg_controldata.c\nindex 209a20a882..b1db9a8d07 100644\n--- a/src/backend/utils/misc/pg_controldata.c\n+++ b/src/backend/utils/misc/pg_controldata.c\n@@ -79,8 +79,8 @@ pg_control_system(PG_FUNCTION_ARGS)\n Datum\n pg_control_checkpoint(PG_FUNCTION_ARGS)\n {\n- Datum values[19];\n- bool nulls[19];\n+ Datum values[18];\n+ bool nulls[18];\n TupleDesc tupdesc;\n HeapTuple htup;\n ControlFileData *ControlFile;\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 23 Dec 2021 17:09:28 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "correct the sizes of values and nulls arrays in pg_control_checkpoint" }, { "msg_contents": "On Thu, Dec 23, 2021, at 8:39 AM, Bharath Rupireddy wrote:\n> pg_control_checkpoint emits 18 columns whereas the values and nulls\n> arrays are defined to be of size 19. Although it's not critical,\n> attaching a tiny patch to fix this.\nGood catch! I'm wondering if a constant wouldn't be useful for such case.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Thu, 23 Dec 2021 12:43:02 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: correct the sizes of values and nulls arrays in\n pg_control_checkpoint" }, { "msg_contents": "On Thu, Dec 23, 2021 at 9:13 PM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Thu, Dec 23, 2021, at 8:39 AM, Bharath Rupireddy wrote:\n>\n> pg_control_checkpoint emits 18 columns whereas the values and nulls\n> arrays are defined to be of size 19. Although it's not critical,\n> attaching a tiny patch to fix this.\n>\n> Good catch! I'm wondering if a constant wouldn't be useful for such case.\n\nThanks. I thought of having a macro, but it creates a lot of diff with\nthe previous versions as we have to change for other pg_control_XXX\nfunctions.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 23 Dec 2021 21:16:02 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: correct the sizes of values and nulls arrays in\n pg_control_checkpoint" }, { "msg_contents": "On Thu, Dec 23, 2021 at 9:16 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Dec 23, 2021 at 9:13 PM Euler Taveira <euler@eulerto.com> wrote:\n> >\n> > On Thu, Dec 23, 2021, at 8:39 AM, Bharath Rupireddy wrote:\n> >\n> > pg_control_checkpoint emits 18 columns whereas the values and nulls\n> > arrays are defined to be of size 19. Although it's not critical,\n> > attaching a tiny patch to fix this.\n> >\n> > Good catch! I'm wondering if a constant wouldn't be useful for such case.\n>\n> Thanks. 
I thought of having a macro, but it creates a lot of diff with\n> the previous versions as we have to change for other pg_control_XXX\n> functions.\n\nI've added a CF entry to not lose track -\nhttps://commitfest.postgresql.org/36/3475/\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 24 Dec 2021 18:03:54 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: correct the sizes of values and nulls arrays in\n pg_control_checkpoint" }, { "msg_contents": "On Thu, Dec 23, 2021 at 05:09:28PM +0530, Bharath Rupireddy wrote:\n> Hi,\n> \n> pg_control_checkpoint emits 18 columns whereas the values and nulls\n> arrays are defined to be of size 19. Although it's not critical,\n> attaching a tiny patch to fix this.\n\nLGTM\n\nIt's helpful to check the history to find where the error was introduced:\n\n4b0d28de06b28e57c540fca458e4853854fbeaf8\n2ede45c3a49e484edfa143850d55eb32dba296de\n\n> diff --git a/src/backend/utils/misc/pg_controldata.c\n> b/src/backend/utils/misc/pg_controldata.c\n> index 209a20a882..b1db9a8d07 100644\n> --- a/src/backend/utils/misc/pg_controldata.c\n> +++ b/src/backend/utils/misc/pg_controldata.c\n> @@ -79,8 +79,8 @@ pg_control_system(PG_FUNCTION_ARGS)\n> Datum\n> pg_control_checkpoint(PG_FUNCTION_ARGS)\n> {\n> - Datum values[19];\n> - bool nulls[19];\n> + Datum values[18];\n> + bool nulls[18];\n> TupleDesc tupdesc;\n> HeapTuple htup;\n> ControlFileData *ControlFile;\n\n\n", "msg_date": "Sat, 25 Dec 2021 18:20:00 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: correct the sizes of values and nulls arrays in\n pg_control_checkpoint" }, { "msg_contents": "On Thu, Dec 23, 2021 at 09:16:02PM +0530, Bharath Rupireddy wrote:\n> Thanks. 
I thought of having a macro, but it creates a lot of diff with\n> the previous versions as we have to change for other pg_control_XXX\n> functions.\n\nYeah, I was wondering about that, but that's not worth the potential\nconflict noise with the back-branches. Hence, fixed as suggested\nfirst upthread. Thanks!\n--\nMichael", "msg_date": "Sun, 26 Dec 2021 17:42:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: correct the sizes of values and nulls arrays in\n pg_control_checkpoint" } ]
[ { "msg_contents": "Hi,\n\npg_archivecleanup currently takes a WAL file name as input to delete\nthe WAL files prior to it [1]. As suggested by Satya (cc-ed) in\npg_replslotdata thread [2], can we enhance the pg_archivecleanup to\nautomatically detect the last checkpoint (from control file) LSN,\ncalculate the lowest restart_lsn required by the replication slots, if\nany (by reading the replication slot info from pg_logical directory),\narchive the unneeded (an archive_command similar to that of the one\nprovided in the server config can be provided as an input) WAL files\nbefore finally deleting them? Making pg_archivecleanup tool as an\nend-to-end solution will help greatly in disk full situations because\nof WAL files growth (inactive replication slots, archive command\nfailures, infrequent checkpoint etc.).\n\nThoughts?\n\n[1] - When used as a standalone program all WAL files logically\npreceding the oldestkeptwalfile will be removed from archivelocation.\nhttps://www.postgresql.org/docs/devel/pgarchivecleanup.html\n[2] - https://www.postgresql.org/message-id/CAHg%2BQDc9xwN7EmuONT3T91pCqFG6Q-BCe6B-kM-by7r1uPEicg%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 23 Dec 2021 18:28:46 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "pg_archivecleanup - add the ability to detect, archive and delete the\n unneeded wal files on the primary" }, { "msg_contents": "On Thu, Dec 23, 2021, at 9:58 AM, Bharath Rupireddy wrote:\n> pg_archivecleanup currently takes a WAL file name as input to delete\n> the WAL files prior to it [1]. 
As suggested by Satya (cc-ed) in\n> pg_replslotdata thread [2], can we enhance the pg_archivecleanup to\n> automatically detect the last checkpoint (from control file) LSN,\n> calculate the lowest restart_lsn required by the replication slots, if\n> any (by reading the replication slot info from pg_logical directory),\n> archive the unneeded (an archive_command similar to that of the one\n> provided in the server config can be provided as an input) WAL files\n> before finally deleting them? Making pg_archivecleanup tool as an\n> end-to-end solution will help greatly in disk full situations because\n> of WAL files growth (inactive replication slots, archive command\n> failures, infrequent checkpoint etc.).\npg_archivecleanup is a tool to remove WAL files from the *archive*. Are you\nsuggesting to use it for removing files from pg_wal directory too? No, thanks.\nWAL files are a key component for backup and replication. Hence, you cannot\ndeliberately allow a tool to remove WAL files from PGDATA. IMO this issue\nwouldn't occur if you have a monitoring system and alerts and someone to keep\nan eye on it. If the disk full situation was caused by a failed archive command\nor a disconnected standby, it is easy to figure out; the fix is simple.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Thu, 23 Dec 2021 11:52:42 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: pg_archivecleanup - add the ability to detect,\n archive and delete the\n unneeded wal files on the primary" }, { "msg_contents": "Greetings,\n\n* Euler Taveira (euler@eulerto.com) wrote:\n> On Thu, Dec 23, 2021, at 9:58 AM, Bharath Rupireddy wrote:\n> > pg_archivecleanup currently takes a WAL file name as input to delete\n> > the WAL files prior to it [1]. As suggested by Satya (cc-ed) in\n> > pg_replslotdata thread [2], can we enhance the pg_archivecleanup to\n> > automatically detect the last checkpoint (from control file) LSN,\n> > calculate the lowest restart_lsn required by the replication slots, if\n> > any (by reading the replication slot info from pg_logical directory),\n> > archive the unneeded (an archive_command similar to that of the one\n> > provided in the server config can be provided as an input) WAL files\n> > before finally deleting them? Making pg_archivecleanup tool as an\n> > end-to-end solution will help greatly in disk full situations because\n> > of WAL files growth (inactive replication slots, archive command\n> > failures, infrequent checkpoint etc.).\n\nThe overall idea of having a tool for this isn't a bad idea, but ..\n\n> pg_archivecleanup is a tool to remove WAL files from the *archive*. Are you\n> suggesting to use it for removing files from pg_wal directory too? No, thanks.\n\nWe definitely shouldn't have it be part of pg_archivecleanup for the\nsimple reason that it'll be really confusing and almost certainly will\nbe mis-used. For my 2c, we should just remove pg_archivecleanup\nentirely.\n\n> WAL files are a key component for backup and replication. Hence, you cannot\n> deliberately allow a tool to remove WAL files from PGDATA. IMO this issue\n> wouldn't occur if you have a monitoring system and alerts and someone to keep\n> an eye on it. If the disk full situation was caused by a failed archive command\n> or a disconnected standby, it is easy to figure out; the fix is simple.\n\nThis is perhaps a bit far- PG does, in fact, remove WAL files from\nPGDATA. Having a tool which will do this safely when the server isn't\nable to be brought online due to lack of disk space would certainly be\nhelpful rather frequently. 
I agree that monitoring and alerting are\nthings that everyone should implement and pay attention to, but that\ndoesn't happen and instead people end up just blowing away pg_wal and\ncorrupting their database when, had a tool existed, they could have\navoided that happening and brought the system back online in relatively\nshort order without any data loss.\n\nThanks,\n\nStephen", "msg_date": "Wed, 29 Dec 2021 08:57:10 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: pg_archivecleanup - add the ability to detect, archive and\n delete the unneeded wal files on the primary" }, { "msg_contents": "On Wed, Dec 29, 2021 at 7:27 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > On Thu, Dec 23, 2021, at 9:58 AM, Bharath Rupireddy wrote:\n> > > pg_archivecleanup currently takes a WAL file name as input to delete\n> > > the WAL files prior to it [1]. As suggested by Satya (cc-ed) in\n> > > pg_replslotdata thread [2], can we enhance the pg_archivecleanup to\n> > > automatically detect the last checkpoint (from control file) LSN,\n> > > calculate the lowest restart_lsn required by the replication slots, if\n> > > any (by reading the replication slot info from pg_logical directory),\n> > > archive the unneeded (an archive_command similar to that of the one\n> > > provided in the server config can be provided as an input) WAL files\n> > > before finally deleting them? Making pg_archivecleanup tool as an\n> > > end-to-end solution will help greatly in disk full situations because\n> > > of WAL files growth (inactive replication slots, archive command\n> > > failures, infrequent checkpoint etc.).\n>\n> The overall idea of having a tool for this isn't a bad idea, but ..\n>\n> > pg_archivecleanup is a tool to remove WAL files from the *archive*. Are you\n> > suggesting to use it for removing files from pg_wal directory too? 
No, thanks.\n>\n> We definitely shouldn't have it be part of pg_archivecleanup for the\n> simple reason that it'll be really confusing and almost certainly will\n> be mis-used.\n\n+1\n\n> > WAL files are a key component for backup and replication. Hence, you cannot\n> > deliberately allow a tool to remove WAL files from PGDATA. IMO this issue\n> > wouldn't occur if you have a monitoring system and alerts and someone to keep\n> > an eye on it. If the disk full situation was caused by a failed archive command\n> > or a disconnected standby, it is easy to figure out; the fix is simple.\n>\n> This is perhaps a bit far- PG does, in fact, remove WAL files from\n> PGDATA. Having a tool which will do this safely when the server isn't\n> able to be brought online due to lack of disk space would certainly be\n> helpful rather frequently. I agree that monitoring and alerting are\n> things that everyone should implement and pay attention to, but that\n> doesn't happen and instead people end up just blowing away pg_wal and\n> corrupting their database when, had a tool existed, they could have\n> avoided that happening and brought the system back online in relatively\n> short order without any data loss.\n\nThanks. Yes, the end-to-end tool is helpful in rather eventual\nsituations and having it in the core is more helpful instead of every\npostgres vendor developing their own solution and many times it's hard\nto get it right. Also, I agree to not club this idea with\npg_archviecleanup. 
How about having a new tool like\npg_walcleanup/pg_xlogcleanup helping the developers/admins/users in\neventual situations?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 29 Dec 2021 20:06:26 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_archivecleanup - add the ability to detect, archive and delete\n the unneeded wal files on the primary" }, { "msg_contents": "On Wed, Dec 29, 2021 at 8:06 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Dec 29, 2021 at 7:27 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > On Thu, Dec 23, 2021, at 9:58 AM, Bharath Rupireddy wrote:\n> > > > pg_archivecleanup currently takes a WAL file name as input to delete\n> > > > the WAL files prior to it [1]. As suggested by Satya (cc-ed) in\n> > > > pg_replslotdata thread [2], can we enhance the pg_archivecleanup to\n> > > > automatically detect the last checkpoint (from control file) LSN,\n> > > > calculate the lowest restart_lsn required by the replication slots, if\n> > > > any (by reading the replication slot info from pg_logical directory),\n> > > > archive the unneeded (an archive_command similar to that of the one\n> > > > provided in the server config can be provided as an input) WAL files\n> > > > before finally deleting them? Making pg_archivecleanup tool as an\n> > > > end-to-end solution will help greatly in disk full situations because\n> > > > of WAL files growth (inactive replication slots, archive command\n> > > > failures, infrequent checkpoint etc.).\n> >\n> > The overall idea of having a tool for this isn't a bad idea, but ..\n> >\n> > > pg_archivecleanup is a tool to remove WAL files from the *archive*. Are you\n> > > suggesting to use it for removing files from pg_wal directory too? 
No, thanks.\n> >\n> > We definitely shouldn't have it be part of pg_archivecleanup for the\n> > simple reason that it'll be really confusing and almost certainly will\n> > be mis-used.\n>\n> +1\n>\n> > > WAL files are a key component for backup and replication. Hence, you cannot\n> > > deliberately allow a tool to remove WAL files from PGDATA. IMO this issue\n> > > wouldn't occur if you have a monitoring system and alerts and someone to keep\n> > > an eye on it. If the disk full situation was caused by a failed archive command\n> > > or a disconnected standby, it is easy to figure out; the fix is simple.\n> >\n> > This is perhaps a bit far- PG does, in fact, remove WAL files from\n> > PGDATA. Having a tool which will do this safely when the server isn't\n> > able to be brought online due to lack of disk space would certainly be\n> > helpful rather frequently. I agree that monitoring and alerting are\n> > things that everyone should implement and pay attention to, but that\n> > doesn't happen and instead people end up just blowing away pg_wal and\n> > corrupting their database when, had a tool existed, they could have\n> > avoided that happening and brought the system back online in relatively\n> > short order without any data loss.\n>\n> Thanks. Yes, the end-to-end tool is helpful in rather eventual\n> situations and having it in the core is more helpful instead of every\n> postgres vendor developing their own solution and many times it's hard\n> to get it right. Also, I agree to not club this idea with\n> pg_archviecleanup. How about having a new tool like\n> pg_walcleanup/pg_xlogcleanup helping the developers/admins/users in\n> eventual situations?\n\nThanks for the comments. 
Here's a new tool called pg_walcleaner which\nbasically deletes (optionally archiving before deletion) the unneeded\nWAL files.\n\nPlease provide your thoughts and review the patches.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Mon, 18 Apr 2022 09:59:12 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "pg_walcleaner - new tool to detect, archive and delete the unneeded\n wal files (was Re: pg_archivecleanup - add the ability to detect, archive and\n delete the unneeded wal files on the primary)" }, { "msg_contents": "Greetings,\n\n* Bharath Rupireddy (bharath.rupireddyforpostgres@gmail.com) wrote:\n> Thanks for the comments. Here's a new tool called pg_walcleaner which\n> basically deletes (optionally archiving before deletion) the unneeded\n> WAL files.\n> \n> Please provide your thoughts and review the patches.\n\nAlright, I spent some more time thinking about this and contemplating\nwhat the next steps are... and I feel like the next step is basically\n\"add a HINT when the server can't start due to being out of disk space\nthat one should consider running pg_walcleaner\" and at that point... why\naren't we just, uh, doing that? This is all still quite hand-wavy, but\nit sure would be nice to be able to avoid downtime due to a broken\narchiving setup. pgbackrest has a way of doing this and while we, of\ncourse, discourage the use of that option, as it means throwing away\nWAL, it's an option that users have. 
PG could have a similar option.\nBasically, to archive_command/library what max_slot_wal_keep_size is for\nslots.\n\nThat isn't to say that we shouldn't also have a tool like this, but it\ngenerally feels like we're taking a reactive approach here rather than a\nproactive one to addressing the root issue.\n\nThanks,\n\nStephen", "msg_date": "Mon, 18 Apr 2022 10:11:05 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: pg_walcleaner - new tool to detect, archive and delete the\n unneeded wal files (was Re: pg_archivecleanup - add the ability to detect,\n archive and delete the unneeded wal files on the primary)" }, { "msg_contents": "On Mon, Apr 18, 2022 at 7:41 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Bharath Rupireddy (bharath.rupireddyforpostgres@gmail.com) wrote:\n> > Thanks for the comments. Here's a new tool called pg_walcleaner which\n> > basically deletes (optionally archiving before deletion) the unneeded\n> > WAL files.\n> >\n> > Please provide your thoughts and review the patches.\n>\n> Alright, I spent some more time thinking about this and contemplating\n> what the next steps are... and I feel like the next step is basically\n> \"add a HINT when the server can't start due to being out of disk space\n> that one should consider running pg_walcleaner\" and at that point... why\n> aren't we just, uh, doing that? This is all still quite hand-wavy, but\n> it sure would be nice to be able to avoid downtime due to a broken\n> archiving setup. pgbackrest has a way of doing this and while we, of\n> course, discourage the use of that option, as it means throwing away\n> WAL, it's an option that users have. PG could have a similar option.\n> Basically, to archive_command/library what max_slot_wal_keep_size is for\n> slots.\n\nThanks. I get your point. 
The way I see it is that the postgres should\nbe self-aware of the about-to-get-full disk (probably when the data\ndirectory size is 90%(configurable, of course) of total disk size) and\nthen freeze the new write operations (may be via new ALTER SYSTEM SET\nREAD-ONLY or setting default_transaction_read_only GUC) and then go\nclean the unneeded WAL files by just invoking pg_walcleaner tool\nperhaps. I think, so far, this kind of work has been done outside of\npostgres. Even then, we might get into out-of-disk situations\ndepending on how frequently we check the data directory size to\ncompute the 90% configurable limit. Detecting the disk size is the KEY\nhere. Hence we need an offline invokable tool like pg_walcleaner.\n\n Actually, I was planning to write an extension with a background\nworker doing this for us.\n\n> That isn't to say that we shouldn't also have a tool like this, but it\n> generally feels like we're taking a reactive approach here rather than a\n> proactive one to addressing the root issue.\n\nAgree. The offline tool like pg_walcleaner can help greatly even with\nsome sort of above internal/external disk space monitoring tools.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 18 Apr 2022 20:35:49 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walcleaner - new tool to detect, archive and delete the\n unneeded wal files (was Re: pg_archivecleanup - add the ability to detect,\n archive and delete the unneeded wal files on the primary)" }, { "msg_contents": "Greeting,\n\n* Bharath Rupireddy (bharath.rupireddyforpostgres@gmail.com) wrote:\n> On Mon, Apr 18, 2022 at 7:41 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > * Bharath Rupireddy (bharath.rupireddyforpostgres@gmail.com) wrote:\n> > > Thanks for the comments. 
Here's a new tool called pg_walcleaner which\n> > > basically deletes (optionally archiving before deletion) the unneeded\n> > > WAL files.\n> > >\n> > > Please provide your thoughts and review the patches.\n> >\n> > Alright, I spent some more time thinking about this and contemplating\n> > what the next steps are... and I feel like the next step is basically\n> > \"add a HINT when the server can't start due to being out of disk space\n> > that one should consider running pg_walcleaner\" and at that point... why\n> > aren't we just, uh, doing that? This is all still quite hand-wavy, but\n> > it sure would be nice to be able to avoid downtime due to a broken\n> > archiving setup. pgbackrest has a way of doing this and while we, of\n> > course, discourage the use of that option, as it means throwing away\n> > WAL, it's an option that users have. PG could have a similar option.\n> > Basically, to archive_command/library what max_slot_wal_keep_size is for\n> > slots.\n> \n> Thanks. I get your point. The way I see it is that the postgres should\n> be self-aware of the about-to-get-full disk (probably when the data\n> directory size is 90%(configurable, of course) of total disk size) and\n> then freeze the new write operations (may be via new ALTER SYSTEM SET\n> READ-ONLY or setting default_transaction_read_only GUC) and then go\n> clean the unneeded WAL files by just invoking pg_walcleaner tool\n> perhaps. I think, so far, this kind of work has been done outside of\n> postgres. Even then, we might get into out-of-disk situations\n> depending on how frequently we check the data directory size to\n> compute the 90% configurable limit. Detecting the disk size is the KEY\n> here. Hence we need an offline invokable tool like pg_walcleaner.\n\nUgh, last I checked, figuring out if a given filesystem is near being\nfull is a pain to do in a cross-platform way. Why not just do exactly\nwhat we already are doing for replication slots, but for\narchive_command? 
Then we wouldn't need to go into a read-only mode.\nPerhaps going into a read-only mode would be an alternative option to\nthat but we should definitely be letting the admin pick what to do in\nsuch a case. The idea of going read-only and then *also* removing WAL\nfiles doesn't seem like it's ever the right choice though..?\n\nAs for worrying about frequency.. that seems unlikely to be that serious\nof an issue, if we just check how far behind we are with each time we\ntry to run archive_command. That's basically how the pgbackrest option\nworks and we've had few issues with it not being 'soon enough'.\n\n> Actually, I was planning to write an extension with a background\n> worker doing this for us.\n\n... but we have a background worker already for archiving that could\nhandle this for us, doesn't seem like we need another.\n\n> > That isn't to say that we shouldn't also have a tool like this, but it\n> > generally feels like we're taking a reactive approach here rather than a\n> > proactive one to addressing the root issue.\n> \n> Agree. The offline tool like pg_walcleaner can help greatly even with\n> some sort of above internal/external disk space monitoring tools.\n\nSee, this seems like a really bad idea to me. I'd be very concerned\nabout people mis-using this tool in some way and automating its usage\nstrikes me as absolutely exactly that.. Are we sure that we can\nguarantee that we don't remove things we shouldn't when this ends up\ngetting run against a running cluster from someone's automated tooling?\nOr when someone sees that it refuses to run for $reason and tries to..\n\"fix\" that? Seems quite risky to me.. 
I'd probably want to put similar\ncaveats around using this tool as I do around pg_resetwal when doing\ntraining- that is, don't ever, ever, ever use this, heh.\n\nThanks,\n\nStephen", "msg_date": "Mon, 18 Apr 2022 11:18:00 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: pg_walcleaner - new tool to detect, archive and delete the\n unneeded wal files (was Re: pg_archivecleanup - add the ability to detect,\n archive and delete the unneeded wal files on the primary)" }, { "msg_contents": "On Mon, Apr 18, 2022 at 8:48 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greeting,\n>\n> * Bharath Rupireddy (bharath.rupireddyforpostgres@gmail.com) wrote:\n> > On Mon, Apr 18, 2022 at 7:41 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > * Bharath Rupireddy (bharath.rupireddyforpostgres@gmail.com) wrote:\n> > > > Thanks for the comments. Here's a new tool called pg_walcleaner which\n> > > > basically deletes (optionally archiving before deletion) the unneeded\n> > > > WAL files.\n> > > >\n> > > > Please provide your thoughts and review the patches.\n> > >\n> > > Alright, I spent some more time thinking about this and contemplating\n> > > what the next steps are... and I feel like the next step is basically\n> > > \"add a HINT when the server can't start due to being out of disk space\n> > > that one should consider running pg_walcleaner\" and at that point... why\n> > > aren't we just, uh, doing that? This is all still quite hand-wavy, but\n> > > it sure would be nice to be able to avoid downtime due to a broken\n> > > archiving setup. pgbackrest has a way of doing this and while we, of\n> > > course, discourage the use of that option, as it means throwing away\n> > > WAL, it's an option that users have. PG could have a similar option.\n> > > Basically, to archive_command/library what max_slot_wal_keep_size is for\n> > > slots.\n> >\n> > Thanks. I get your point. 
The way I see it is that the postgres should\n> > be self-aware of the about-to-get-full disk (probably when the data\n> > directory size is 90%(configurable, of course) of total disk size) and\n> > then freeze the new write operations (may be via new ALTER SYSTEM SET\n> > READ-ONLY or setting default_transaction_read_only GUC) and then go\n> > clean the unneeded WAL files by just invoking pg_walcleaner tool\n> > perhaps. I think, so far, this kind of work has been done outside of\n> > postgres. Even then, we might get into out-of-disk situations\n> > depending on how frequently we check the data directory size to\n> > compute the 90% configurable limit. Detecting the disk size is the KEY\n> > here. Hence we need an offline invokable tool like pg_walcleaner.\n>\n> Ugh, last I checked, figuring out if a given filesystem is near being\n> full is a pain to do in a cross-platform way. Why not just do exactly\n> what we already are doing for replication slots, but for\n> archive_command?\n\nDo you mean to say that if the archvie_command fails, say, for \"some\ntime\" or \"some number of attempts\", just let the server not bother\nabout it and checkpoint delete the WAL files instead of going out of\ndisk? If this is the thought, then it's more dangerous as we might end\nup losing the WAL forever. For invalidating replication slots, it's\nokay because the required WAL can exist somewhere (either on the\nprimary or on the archive location).\n\n> > > That isn't to say that we shouldn't also have a tool like this, but it\n> > > generally feels like we're taking a reactive approach here rather than a\n> > > proactive one to addressing the root issue.\n> >\n> > Agree. The offline tool like pg_walcleaner can help greatly even with\n> > some sort of above internal/external disk space monitoring tools.\n>\n> See, this seems like a really bad idea to me. 
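For illustration, the disk-size check described in the quoted text could look something like the sketch below. The 90% threshold, the data-directory path, and the "freeze writes then clean WAL" reaction are assumptions of the sketch, not part of any posted patch:

```python
import shutil
from collections import namedtuple

# Stand-in for the result of shutil.disk_usage(), so the decision logic
# can be exercised without touching a real filesystem.
DiskUsage = namedtuple("DiskUsage", "total used free")

def disk_nearly_full(usage, threshold=0.90):
    # True once the filesystem holding the data directory crosses the
    # configurable "about to get full" threshold (90% is illustrative).
    return usage.used / usage.total >= threshold

def check_data_directory(data_directory, threshold=0.90):
    # A monitoring loop or background worker would run this periodically;
    # when it trips, the reaction discussed above would be to freeze new
    # writes (e.g. default_transaction_read_only) and clean unneeded WAL.
    return disk_nearly_full(shutil.disk_usage(data_directory), threshold)
```

As the quoted text notes, how often such a check runs is the weak point: a burst of WAL between two polls can still fill the disk.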
I'd be very concerned\n> about people mis-using this tool in some way and automating its usage\n> strikes me as absolutely exactly that.. Are we sure that we can\n> guarantee that we don't remove things we shouldn't when this ends up\n> getting run against a running cluster from someone's automated tooling?\n> Or when someone sees that it refuses to run for $reason and tries to..\n> \"fix\" that? Seems quite risky to me.. I'd probably want to put similar\n> caveats around using this tool as I do around pg_resetwal when doing\n> training- that is, don't ever, ever, ever use this, heh.\n\nThe initial version of the patch doesn't check if the server crashed\nor not before running it. I was thinking of looking at the\npostmaster.pid or pg_control file (however they don't guarantee\nwhether the server is up or crashed because the server can crash\nwithout deleting postmaster.pid or updating pg_control file). Another\nidea is to let pg_walcleaner fire a sample query ('SELECT 1') to see\nif the server is up and running, if yes, exit, otherwise proceed with\nits work.\n\nAlso, to not cause losing of WAL permanently, we must recommend using\narchvie_command so that the WAL can be moved to an alternative\nlocation (could be the same archvie_location that primary uses).\n\nAnd yes, we must have clear usage guidelines in the docs.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 22 Apr 2022 19:07:16 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walcleaner - new tool to detect, archive and delete the\n unneeded wal files (was Re: pg_archivecleanup - add the ability to detect,\n archive and delete the unneeded wal files on the primary)" }, { "msg_contents": "Greetings,\n\n* Bharath Rupireddy (bharath.rupireddyforpostgres@gmail.com) wrote:\n> On Mon, Apr 18, 2022 at 8:48 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > * Bharath Rupireddy (bharath.rupireddyforpostgres@gmail.com) wrote:\n> > 
> On Mon, Apr 18, 2022 at 7:41 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > > * Bharath Rupireddy (bharath.rupireddyforpostgres@gmail.com) wrote:\n> > > > > Thanks for the comments. Here's a new tool called pg_walcleaner which\n> > > > > basically deletes (optionally archiving before deletion) the unneeded\n> > > > > WAL files.\n> > > > >\n> > > > > Please provide your thoughts and review the patches.\n> > > >\n> > > > Alright, I spent some more time thinking about this and contemplating\n> > > > what the next steps are... and I feel like the next step is basically\n> > > > \"add a HINT when the server can't start due to being out of disk space\n> > > > that one should consider running pg_walcleaner\" and at that point... why\n> > > > aren't we just, uh, doing that? This is all still quite hand-wavy, but\n> > > > it sure would be nice to be able to avoid downtime due to a broken\n> > > > archiving setup. pgbackrest has a way of doing this and while we, of\n> > > > course, discourage the use of that option, as it means throwing away\n> > > > WAL, it's an option that users have. PG could have a similar option.\n> > > > Basically, to archive_command/library what max_slot_wal_keep_size is for\n> > > > slots.\n> > >\n> > > Thanks. I get your point. The way I see it is that the postgres should\n> > > be self-aware of the about-to-get-full disk (probably when the data\n> > > directory size is 90%(configurable, of course) of total disk size) and\n> > > then freeze the new write operations (may be via new ALTER SYSTEM SET\n> > > READ-ONLY or setting default_transaction_read_only GUC) and then go\n> > > clean the unneeded WAL files by just invoking pg_walcleaner tool\n> > > perhaps. I think, so far, this kind of work has been done outside of\n> > > postgres. Even then, we might get into out-of-disk situations\n> > > depending on how frequently we check the data directory size to\n> > > compute the 90% configurable limit. Detecting the disk size is the KEY\n> > > here. 
Hence we need an offline invokable tool like pg_walcleaner.\n> >\n> > Ugh, last I checked, figuring out if a given filesystem is near being\n> > full is a pain to do in a cross-platform way. Why not just do exactly\n> > what we already are doing for replication slots, but for\n> > archive_command?\n> \n> Do you mean to say that if the archvie_command fails, say, for \"some\n> time\" or \"some number of attempts\", just let the server not bother\n> about it and checkpoint delete the WAL files instead of going out of\n> disk? If this is the thought, then it's more dangerous as we might end\n> up losing the WAL forever. For invalidating replication slots, it's\n> okay because the required WAL can exist somewhere (either on the\n> primary or on the archive location).\n\nI was thinking more specifically along the lines of \"if there's > X GB\nof WAL that hasn't been archived, give up on archiving anything new\"\n(which is how the pgbackrest option works).\n\nAs archiving with this command is optional, it does present the same\nrisk too. Perhaps if we flipped it around to require the\narchive_command be provided then it'd be a bit better, though we would\nalso need a way for users to ask for us to just delete the WAL without\narchiving it. There again though ... the server already has a way of\nboth archiving and removing archived WAL and also has now grown the\narchive_library option, something that this tool would be pretty hard to\nreplicate, I feel like, as it wouldn't be loading the library into a PG\nbackend anymore. As we don't have any real archive libraries yet, it's\nhard to say if that's going to be an actual issue or not. Something to\nconsider though.\n\n> > > > That isn't to say that we shouldn't also have a tool like this, but it\n> > > > generally feels like we're taking a reactive approach here rather than a\n> > > > proactive one to addressing the root issue.\n> > >\n> > > Agree. 
The offline tool like pg_walcleaner can help greatly even with\n> > > some sort of above internal/external disk space monitoring tools.\n> >\n> > See, this seems like a really bad idea to me. I'd be very concerned\n> > about people mis-using this tool in some way and automating its usage\n> > strikes me as absolutely exactly that.. Are we sure that we can\n> > guarantee that we don't remove things we shouldn't when this ends up\n> > getting run against a running cluster from someone's automated tooling?\n> > Or when someone sees that it refuses to run for $reason and tries to..\n> > \"fix\" that? Seems quite risky to me.. I'd probably want to put similar\n> > caveats around using this tool as I do around pg_resetwal when doing\n> > training- that is, don't ever, ever, ever use this, heh.\n> \n> The initial version of the patch doesn't check if the server crashed\n> or not before running it. I was thinking of looking at the\n> postmaster.pid or pg_control file (however they don't guarantee\n> whether the server is up or crashed because the server can crash\n> without deleting postmaster.pid or updating pg_control file). Another\n> idea is to let pg_walcleaner fire a sample query ('SELECT 1') to see\n> if the server is up and running, if yes, exit, otherwise proceed with\n> its work.\n\nAll of which isn't an issue if we don't have an external tool trying to\ndo this and instead have the server doing it as the server knows its\ninternal status, that the archive command has been failing long enough\nto pass the configuration threshold, and that the WAL isn't needed for\ncrash recovery. The ability to avoid having to crash and go through\nthat process is pretty valuable. Still, a crash may still happen and\nit'd be useful to have a clean way to deal with it. I'm not really a\nfan of having to essentially configure this external command as well as\nhave the server configured. 
Have we settled that there's no way to make\nthe server archive while there's no space available and before trying to\nwrite out more data?\n\n> Also, to not cause losing of WAL permanently, we must recommend using\n> archvie_command so that the WAL can be moved to an alternative\n> location (could be the same archvie_location that primary uses).\n\nI agree we should recommend using archive_command or archive_library, of\ncourse, but if that's been done and is working properly then this tool\nisn't really needed. The use-case we're trying to address, I believe,\nis something like:\n\n1) archive command starts failing for some reason\n2) WAL piles up on the primary\n3) primary runs out of disk space, crash happens\n4) archive command gets 'fixed' in some fashion\n5) WAL is archived and removed from primary\n6) primary is restarted and able to go through crash recovery\n7) server is online again\n\nNow, I was trying to suggest an approach to addressing the issue at #2,\nthat is, avoid having WAL pile up without end on the primary and avoid\nthe crash in the first place. For users who care more about uptime and\nless about WAL, that's likely what they want.\n\nFor users who care more about WAL than uptime, it'd be good to have a\nway to help them too, but to do that, #4 has to happen and, once that's\ndone, #5 and following just need to be accomplished in whatever way is\nsimplest. 
The thought I'm having here is that the simplest answer, at\nleast from the user's perspective, is that the server is able to just be\nbrought up with the fixed archive command and everything just works-\narchiving happens, space is free'd up, and the server comes up and\ncontinues running.\n\nI accept that it isn't this patch's job or mandate to go implement some\nnew option that I've thought up, and I could be convinced that this\nseparate tool is just how we're going to have to get #5 accomplished for\nnow due to the complexity of making the server do the archiving early on\nand/or that it has other downsides (if the crash wasn't due to running\nout of space, making the server wait to come online until after the WAL\nhas been archived wouldn't be ideal) that make it a poor choice overall,\nbut it seems like it's something that's at least worth some thought and\nconsideration of if there's a way to accomplish this with a bit less\nmanual user involvement, as that tends to be error-prone.\n\nThanks,\n\nStephen", "msg_date": "Mon, 25 Apr 2022 14:39:57 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: pg_walcleaner - new tool to detect, archive and delete the\n unneeded wal files (was Re: pg_archivecleanup - add the ability to detect,\n archive and delete the unneeded wal files on the primary)" }, { "msg_contents": "On Tue, Apr 26, 2022 at 12:09 AM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> I was thinking more specifically along the lines of \"if there's > X GB\n> of WAL that hasn't been archived, give up on archiving anything new\"\n> (which is how the pgbackrest option works).\n\nIMO, this option is dangerous because the users might lose the\nunarchived WAL files completely. 
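To make the quoted "give up beyond X GB" proposal concrete, it amounts to a backlog cap along these lines. This is purely illustrative: the default 16MB segment size and the cap value are assumptions, and pgBackRest's equivalent option works on the same principle rather than with this exact arithmetic:

```python
WAL_SEGMENT_SIZE = 16 * 1024 * 1024  # default WAL segment size (16MB)

def give_up_archiving(unarchived_segments, max_backlog_bytes):
    # Once the unarchived backlog exceeds the configured cap, stop
    # archiving (accepting the loss of PITR capability) rather than
    # letting pg_wal fill the disk and crash the server.
    return unarchived_segments * WAL_SEGMENT_SIZE > max_backlog_bytes
```

That tradeoff — uptime over a complete archive — is exactly what the objection below is about.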
Instead, the archive_command should be fixed, either before the server\nruns out of disk space (if someone or some monitoring system detected\nthe archive_command failure) or after the server crashes with no space.\n\n> As archiving with this command is optional, it does present the same\n> risk too. Perhaps if we flipped it around to require the\n> archive_command be provided then it'd be a bit better, though we would\n> also need a way for users to ask for us to just delete the WAL without\n> archiving it. There again though ... the server already has a way of\n> both archiving and removing archived WAL and also has now grown the\n> archive_library option, something that this tool would be pretty hard to\n> replicate, I feel like, as it wouldn't be loading the library into a PG\n> backend anymore. As we don't have any real archive libraries yet, it's\n> hard to say if that's going to be an actual issue or not. Something to\n> consider though.\n\nActually, why are we assuming that the WAL files pile up only\nbecause of archive command failures (of course, that's the main cause,\neven in a well-configured server)? Imagine if the checkpoints weren't\nhappening frequently for whatever reason, or the\nmax_slot_wal_keep_size or wal_keep_size weren't set appropriately\n(even if they are set properly, someone, possibly an attacker, can\nreset them), or high write activity were generating huge amounts of WAL\n(especially on smaller servers) - these too can be reasons for the\nserver going down because of WAL file growth.\n\nThe main purpose of this tool is for DBAs or service engineers to use\nit as a last-resort option after the server goes out of disk space, to\nquickly bring the server back online so that the regular cleanup\nactivity can take place or storage scaling operations can be\nperformed if required. There's a dry run option in pg_walcleaner to\nsee if it can free up some space at all. 
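In essence, the per-file decision behind that dry run comes down to something like the following simplified sketch. WAL segment names really are 24 uppercase hex digits (8 for the timeline, 16 for the log/segment position), but the sketch deliberately ignores timeline switches and .history/.backup files, which the real tool has to handle:

```python
def wal_segment_unneeded(filename, redo_segment):
    # A WAL segment may be archived and then removed only if it precedes
    # the segment containing the last checkpoint's redo location; on a
    # single timeline the fixed-width hex names sort correctly as strings.
    if len(filename) != 24 or any(c not in "0123456789ABCDEF" for c in filename):
        return False  # leave .history/.backup/partial files alone here
    if filename[:8] != redo_segment[:8]:
        return False  # different timeline: out of scope for this sketch
    return filename < redo_segment
```

A dry run would simply print the files for which this returns true (and their total size) instead of archiving and deleting them.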
If required, we can provide an\noption to archive and delete only up to a specified maximum number of\nWAL files.\n\nI'm okay with making the archive_command mandatory; if archiving\nisn't wanted when using the pg_walcleaner tool, users can set a dummy\nvalue such as archive_command='/bin/true'.\n\n> > The initial version of the patch doesn't check if the server crashed\n> > or not before running it. I was thinking of looking at the\n> > postmaster.pid or pg_control file (however they don't guarantee\n> > whether the server is up or crashed because the server can crash\n> > without deleting postmaster.pid or updating pg_control file). Another\n> > idea is to let pg_walcleaner fire a sample query ('SELECT 1') to see\n> > if the server is up and running, if yes, exit, otherwise proceed with\n> > its work.\n>\n> All of which isn't an issue if we don't have an external tool trying to\n> do this and instead have the server doing it as the server knows its\n> internal status, that the archive command has been failing long enough\n> to pass the configuration threshold, and that the WAL isn't needed for\n> crash recovery. The ability to avoid having to crash and go through\n> that process is pretty valuable. Still, a crash may still happen and\n> it'd be useful to have a clean way to deal with it. I'm not really a\n> fan of having to essentially configure this external command as well as\n> have the server configured. 
Have we settled that there's no way to make\n> the server archive while there's no space available and before trying to\n> write out more data?\n\nThe pg_walcleaner tool isn't intrusive in the sense that it doesn't\ndelete the WAL files that are required for the server to come up (as\nit checks for the checkpoint redo WAL file), apart from this it has\narchive_command too so no loss of the WAL file(s) at all unlike the\npgbackrest option.\n\n> > Also, to not cause losing of WAL permanently, we must recommend using\n> > archvie_command so that the WAL can be moved to an alternative\n> > location (could be the same archvie_location that primary uses).\n>\n> I agree we should recommend using archive_command or archive_library, of\n> course, but if that's been done and is working properly then this tool\n> isn't really needed. The use-case we're trying to address, I believe,\n> is something like:\n>\n> 1) archive command starts failing for some reason\n> 2) WAL piles up on the primary\n> 3) primary runs out of disk space, crash happens\n> 4) archive command gets 'fixed' in some fashion\n> 5) WAL is archived and removed from primary\n> 6) primary is restarted and able to go through crash recovery\n> 7) server is online again\n>\n> Now, I was trying to suggest an approach to addressing the issue at #2,\n> that is, avoid having WAL pile up without end on the primary and avoid\n> the crash in the first place. For users who care more about uptime and\n> less about WAL, that's likely what they want.\n>\n> For users who care more about WAL than uptime, it'd be good to have a\n> way to help them too, but to do that, #4 has to happen and, once that's\n> done, #5 and following just need to be accomplished in whatever way is\n> simplest. 
The thought I'm having here is that the simplest answer, at\n> least from the user's perspective, is that the server is able to just be\n> brought up with the fixed archive command and everything just works-\n> archiving happens, space is free'd up, and the server comes up and\n> continues running.\n>\n> I accept that it isn't this patch's job or mandate to go implement some\n> new option that I've thought up, and I could be convinced that this\n> separate tool is just how we're going to have to get #5 accomplished for\n> now due to the complexity of making the server do the archiving early on\n> and/or that it has other downsides (if the crash wasn't due to running\n> out of space, making the server wait to come online until after the WAL\n> has been archived wouldn't be ideal) that make it a poor choice overall,\n> but it seems like it's something that's at least worth some thought and\n> consideration of if there's a way to accomplish this with a bit less\n> manual user involvement, as that tends to be error-prone.\n\nAs I said above, let's not just think that the archive_command\nfailures are the only reasons (of course they are the main causes) for\nWAL file growth.\n\nThe pg_walcleaner proposed here is a better option IMO for the\ndevelopers/DBAs/service engineers to see if they can quickly bring up\nthe crashed server avoiding some manual work of looking at which WAL\nfiles can be deleted as such. Also if they fixed the failed\narchive_command or wrong settings that caused the WAL files growth,\nthe server will be able to do the other cleanup and come online\ncleanly. 
I'm sure many would have faced this problem of server crashes\ndue to out of disk space at some point in time in production\nenvironments, especially with the older versions of postgres.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 29 Apr 2022 17:38:51 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walcleaner - new tool to detect, archive and delete the\n unneeded wal files (was Re: pg_archivecleanup - add the ability to detect,\n archive and delete the unneeded wal files on the primary)" }, { "msg_contents": "Greetings,\n\n* Bharath Rupireddy (bharath.rupireddyforpostgres@gmail.com) wrote:\n> On Tue, Apr 26, 2022 at 12:09 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > I was thinking more specifically along the lines of \"if there's > X GB\n> > of WAL that hasn't been archived, give up on archiving anything new\"\n> > (which is how the pgbackrest option works).\n> \n> IMO, this option is dangerous because the users might lose the\n> unarchived WAL files completely. Instead, fixing the archive_command\n> either before the server runs out of disk space (if someone/some\n> monitoring detected the archive_command failure) or after the server\n> crashes with no space.\n\nCertainly, fixing the archive_command would be the best answer. We're\nhere because we know that doesn't always happen and are looking for\nother ways to avoid the server crashing or to at least help clean things\nup if a crash does happen.\n\n> > As archiving with this command is optional, it does present the same\n> > risk too. Perhaps if we flipped it around to require the\n> > archive_command be provided then it'd be a bit better, though we would\n> > also need a way for users to ask for us to just delete the WAL without\n> > archiving it. There again though ... 
the server already has a way of\n> > both archiving and removing archived WAL and also has now grown the\n> > archive_library option, something that this tool would be pretty hard to\n> > replicate, I feel like, as it wouldn't be loading the library into a PG\n> > backend anymore. As we don't have any real archive libraries yet, it's\n> > hard to say if that's going to be an actual issue or not. Something to\n> > consider though.\n> \n> Actually, why are we just thinking that the WAL files grow up only\n> because of archive command failures (of course, it's the main cause\n> even in a well-configured server)?. Imagine if the checkpoints weren't\n> happening frequently for whatever reasons or the\n> max_slot_wal_keep_size or wal_keep_size weren't set appropriately\n> (even if they set properly, someone, could be an attacker, can reset\n> them), high write activity generating huge WAL files (especially on\n> smaller servers) - these too can be reasons for server going down\n> because of WAL files growth.\n\nThe option that I outline would probably be the \"end\" of all of the\nabove- that is, its job is to absolutely keep the pg_wal filesystem from\nrunning out of space by making sure that pg_wal doesn't grow beyond the\nconfigured size. That might mean that we don't keep wal_keep_size\namount of WAL or maybe get rid of WAL before max_slot_wal_keep_size. If\ncheckpoints aren't happening, that's different and throwing away WAL\nwould mean corruption on the primary and so we'd just have to accept a\ncrash in that case but I don't think that's anywhere near as common as\nthe other cases. 
We would probably want to throw warnings if someone\nconfigured wal_keep_size bigger than throw_away_unneeded_wal or\nwhatever.\n\n> The main motto of this tool is to use (by DBAs or service engineers)\n> it as a last resort option after the server goes out of disk space to\n> quickly bring the server back online so that the regular cleaning up\n> activity can take place or the storage scale operations can be\n> performed if required. There's a dry run option in pg_walcleaner to\n> see if it can free up some space at all. If required we can provide an\n> option to archive and delete only the maximum of specified number of\n> WAL files.\n\nUnfortunately, it's not likely to be very quick if archive command is\nset to anything real. Interesting idea to have an option along the\nlines of \"please archive X amount of WAL\", but there again- the server\ncould, in theory anyway, start up and get a few WAL segments archived\nand then start doing replay from the crash, and if it ran out of space\nand archiving was still happening, it could perhaps just wait and try\nagain. With the ALTER SYSTEM READ ONLY, it could maybe even just go\nread-only if it ran out of space in the first place and continue to try\nand archive WAL. Kind of hard to say how much additional space is\nneeded to get through crash recovery and so this suggested option would\nalways end up having to be a pretty broad guess and you absolutely\ncouldn't start the server while this command is running (which I would\ndefinitely be worried would end up happening...), so that's also\nsomething to think about.\n\n> I'm okay with making the archive_command mandatory and if archiving\n> isn't required to be used with the pg_walcleaner tool they can set\n> some dummy values such as archive_command='/bin/true'.\n\nYeah, that seems at least a bit better. 
I imagine a lot of people will\nprobably use it exactly like that, but this will hopefully at least\ndiscourage them from just throwing it away.\n\n> > > The initial version of the patch doesn't check if the server crashed\n> > > or not before running it. I was thinking of looking at the\n> > > postmaster.pid or pg_control file (however they don't guarantee\n> > > whether the server is up or crashed because the server can crash\n> > > without deleting postmaster.pid or updating pg_control file). Another\n> > > idea is to let pg_walcleaner fire a sample query ('SELECT 1') to see\n> > > if the server is up and running, if yes, exit, otherwise proceed with\n> > > its work.\n> >\n> > All of which isn't an issue if we don't have an external tool trying to\n> > do this and instead have the server doing it as the server knows its\n> > internal status, that the archive command has been failing long enough\n> > to pass the configuration threshold, and that the WAL isn't needed for\n> > crash recovery. The ability to avoid having to crash and go through\n> > that process is pretty valuable. Still, a crash may still happen and\n> > it'd be useful to have a clean way to deal with it. I'm not really a\n> > fan of having to essentially configure this external command as well as\n> > have the server configured. Have we settled that there's no way to make\n> > the server archive while there's no space available and before trying to\n> > write out more data?\n> \n> The pg_walcleaner tool isn't intrusive in the sense that it doesn't\n> delete the WAL files that are required for the server to come up (as\n> it checks for the checkpoint redo WAL file), apart from this it has\n> archive_command too so no loss of the WAL file(s) at all unlike the\n> pgbackrest option.\n\nWon't be any WAL loss with pgbackrest unless it's specifically\nconfigured to throw it away- again, it's a tradeoff. 
Just suggesting\nthat we could have that be part of core as an option.\n\n> > > Also, to not cause losing of WAL permanently, we must recommend using\n> > > archvie_command so that the WAL can be moved to an alternative\n> > > location (could be the same archvie_location that primary uses).\n> >\n> > I agree we should recommend using archive_command or archive_library, of\n> > course, but if that's been done and is working properly then this tool\n> > isn't really needed. The use-case we're trying to address, I believe,\n> > is something like:\n> >\n> > 1) archive command starts failing for some reason\n> > 2) WAL piles up on the primary\n> > 3) primary runs out of disk space, crash happens\n> > 4) archive command gets 'fixed' in some fashion\n> > 5) WAL is archived and removed from primary\n> > 6) primary is restarted and able to go through crash recovery\n> > 7) server is online again\n> >\n> > Now, I was trying to suggest an approach to addressing the issue at #2,\n> > that is, avoid having WAL pile up without end on the primary and avoid\n> > the crash in the first place. For users who care more about uptime and\n> > less about WAL, that's likely what they want.\n> >\n> > For users who care more about WAL than uptime, it'd be good to have a\n> > way to help them too, but to do that, #4 has to happen and, once that's\n> > done, #5 and following just need to be accomplished in whatever way is\n> > simplest. 
The thought I'm having here is that the simplest answer, at\n> > least from the user's perspective, is that the server is able to just be\n> > brought up with the fixed archive command and everything just works-\n> > archiving happens, space is free'd up, and the server comes up and\n> > continues running.\n> >\n> > I accept that it isn't this patch's job or mandate to go implement some\n> > new option that I've thought up, and I could be convinced that this\n> > separate tool is just how we're going to have to get #5 accomplished for\n> > now due to the complexity of making the server do the archiving early on\n> > and/or that it has other downsides (if the crash wasn't due to running\n> > out of space, making the server wait to come online until after the WAL\n> > has been archived wouldn't be ideal) that make it a poor choice overall,\n> > but it seems like it's something that's at least worth some thought and\n> > consideration of if there's a way to accomplish this with a bit less\n> > manual user involvement, as that tends to be error-prone.\n> \n> As I said above, let's not just think that the archive_command\n> failures are the only reasons (of course they are the main causes) for\n> WAL file growth.\n\nI agree that we should consider other reasons and how the suggested\noption might interact with the other existing options we have. This\ndoesn't really address the point that I was making though.\n\n> The pg_walcleaner proposed here is a better option IMO for the\n> developers/DBAs/service engineers to see if they can quickly bring up\n> the crashed server avoiding some manual work of looking at which WAL\n> files can be deleted as such. Also if they fixed the failed\n> archive_command or wrong settings that caused the WAL files growth,\n> the server will be able to do the other cleanup and come online\n> cleanly. 
I'm sure many would have faced this problem of server crashes\n> due to out of disk space at some point in time in production\n> environments, especially with the older versions of postgres.\n\nFor some folks, it'd be better to avoid having a crash at all, but of\ncourse there's downsides to that too.\n\nFor others, it strikes me as clearly simpler if they could just fix the\narchive_command and start PG again rather than having to run some other\ncommand while making sure that PG isn't running (and doesn't end up\ngetting started), and then PG could go do the archiving and, ideally,\nalso be doing WAL replay as soon as there's enough space to allow it,\nand possibly even come back online once crash recovery is done and\ncontinue to archive WAL just like normal operation. I didn't really see\nthat idea addressed in your response and was hoping to get your thoughts\non it as a possible alternative approach to having a separate tool such\nas this.\n\nThanks,\n\nStephen", "msg_date": "Tue, 3 May 2022 17:17:12 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: pg_walcleaner - new tool to detect, archive and delete the\n unneeded wal files (was Re: pg_archivecleanup - add the ability to detect,\n archive and delete the unneeded wal files on the primary)" }, { "msg_contents": "On 5/3/22 17:17, Stephen Frost wrote:\n> * Bharath Rupireddy (bharath.rupireddyforpostgres@gmail.com) wrote:\n>>\n>> The pg_walcleaner tool isn't intrusive in the sense that it doesn't\n>> delete the WAL files that are required for the server to come up (as\n>> it checks for the checkpoint redo WAL file), apart from this it has\n>> archive_command too so no loss of the WAL file(s) at all unlike the\n>> pgbackrest option.\n> \n> Won't be any WAL loss with pgbackrest unless it's specifically\n> configured to throw it away- again, it's a tradeoff.
Just suggesting\n> that we could have that be part of core as an option.\n\nTo be clear, pgBackRest never deletes WAL from the pg_wal directory (or \nmodifies that directory in any way). If archive-push-queue-max is \nconfigured that simply means it will notify Postgres that WAL have been \narchived if the max queue size has been exceeded (even though they have \nnot been archived).\n\nThis should never lead to WAL being required for crash recovery being \ndeleted unless there is a bug in Postgres.\n\nBut yeah, if they configure it there could be a loss of PITR capability.\n\nRegards,\n-David\n\n\n", "msg_date": "Tue, 3 May 2022 17:47:21 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: pg_walcleaner - new tool to detect, archive and delete the\n unneeded wal files (was Re: pg_archivecleanup - add the ability to detect,\n archive and delete the unneeded wal files on the primary)" }, { "msg_contents": "On 25.04.22 20:39, Stephen Frost wrote:\n> All of which isn't an issue if we don't have an external tool trying to\n> do this and instead have the server doing it as the server knows its\n> internal status, that the archive command has been failing long enough\n> to pass the configuration threshold, and that the WAL isn't needed for\n> crash recovery. The ability to avoid having to crash and go through\n> that process is pretty valuable. Still, a crash may still happen and\n> it'd be useful to have a clean way to deal with it. I'm not really a\n> fan of having to essentially configure this external command as well as\n> have the server configured. Have we settled that there's no way to make\n> the server archive while there's no space available and before trying to\n> write out more data?\n\nI would also be in favor of not having an external command and instead \npursue a solution built into the server along the ways you have \noutlined.
Besides the better integration and less potential for misuse \nthat can be achieved that way, maintaining a separate tool has some \nconstant overhead and if users only use it every ten years on average, \nit seems not worth it.\n\n\n", "msg_date": "Thu, 30 Jun 2022 13:03:52 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_walcleaner - new tool to detect, archive and delete the\n unneeded wal files (was Re: pg_archivecleanup - add the ability to detect,\n archive and delete the unneeded wal files on the primary)" }, { "msg_contents": "On Thu, Jun 30, 2022 at 5:33 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 25.04.22 20:39, Stephen Frost wrote:\n> > All of which isn't an issue if we don't have an external tool trying to\n> > do this and instead have the server doing it as the server knows its\n> > internal status, that the archive command has been failing long enough\n> > to pass the configuration threshold, and that the WAL isn't needed for\n> > crash recovery. The ability to avoid having to crash and go through\n> > that process is pretty valuable. Still, a crash may still happen and\n> > it'd be useful to have a clean way to deal with it. I'm not really a\n> > fan of having to essentially configure this external command as well as\n> > have the server configured. Have we settled that there's no way to make\n> > the server archive while there's no space available and before trying to\n> > write out more data?\n>\n> I would also be in favor of not having an external command and instead\n> pursue a solution built into the server along the ways you have\n> outlined. Besides the better integration and less potential for misuse\n> that can be achieved that way, maintaining a separate tool has some\n> constant overhead and if users only use it every ten years on average,\n> it seems not worth it.\n\nThanks for the feedback.
My understanding is this: introduce a GUC\n(similar to max_slot_wal_keep_size), when set, beyond which postgres\nwill not keep the WAL files even if archiving is failing, am I right?\n\nIf my understanding is correct, are we going to say that postgres may\nnot archive \"all\" the WAL files that may be needed for PITR if the\narchiving is failing for long enough? Will this be okay in production\nenvironments?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 8 Jul 2022 22:03:49 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walcleaner - new tool to detect, archive and delete the\n unneeded wal files (was Re: pg_archivecleanup - add the ability to detect,\n archive and delete the unneeded wal files on the primary)" } ]
[ { "msg_contents": "Hi,\n\nWhen periodically collecting and accumulating statistics or status information like pg_locks, pg_stat_activity, pg_prepared_xacts, etc for future troubleshooting or some reasons, I'd like to store a transaction ID of such information as 64-bit version so that the information of specified transaction easily can be identified and picked up by transaction ID. Otherwise it's not easy to distinguish transactions with the same 32-bit XID but different epoch, only by 32-bit XID.\n\nBut since pg_locks or pg_stat_activity etc returns 32-bit XID, we could not store it as 64-bit version. To improve this situation, I'd like to propose to add new function that converts the given 32-bit XID (i.e., xid data type) to 64-bit one (xid8 data type). If we do this, for example we can easily get 64-bit XID from pg_stat_activity by \"SELECT convert_xid_to_xid8(backend_xid) FROM pg_stat_activity\", if necessary. Thought?\n\nAs another approach, we can change the data type of pg_stat_activity.backend_xid or pg_locks.transaction, etc from xid to xid8. But this idea looks overkill to me, and it may break the existing applications accessing pg_locks etc.
So IMO it's better to just provide the convertion function.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 24 Dec 2021 00:03:45 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Add new function to convert 32-bit XID to 64-bit" }, { "msg_contents": "On Thu, Dec 23, 2021 at 8:33 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> Hi,\n>\n> When periodically collecting and accumulating statistics or status information like pg_locks, pg_stat_activity, pg_prepared_xacts, etc for future troubleshooting or some reasons, I'd like to store a transaction ID of such information as 64-bit version so that the information of specified transaction easily can be identified and picked up by transaction ID. Otherwise it's not easy to distinguish transactions with the same 32-bit XID but different epoch, only by 32-bit XID.\n>\n> But since pg_locks or pg_stat_activity etc returns 32-bit XID, we could not store it as 64-bit version. To improve this situation, I'd like to propose to add new function that converts the given 32-bit XID (i.e., xid data type) to 64-bit one (xid8 data type). If we do this, for example we can easily get 64-bit XID from pg_stat_activity by \"SELECT convert_xid_to_xid8(backend_xid) FROM pg_stat_activity\", if necessary. Thought?\n\nWhat will this function do?\n\n From your earlier description it looks like you want this function to\nadd epoch to the xid when making it a 64bit value. Is that right?\n\n>\n> As another approach, we can change the data type of pg_stat_activity.backend_xid or pg_locks.transaction, etc from xid to xid8. But this idea looks overkill to me, and it may break the existing applications accessing pg_locks etc.
So IMO it's better to just provide the convertion function.\n\nObviously we will break backward compatibility for applications\nupgrading to a newer PostgreSQL version. Downside is applications\nusing 64bit xid will need to change their applications.\n\nIf we want to change the datatype anyway, better to create a new type\nLongTransactionId or something like to represent 64bit transaction id\nand then change these functions to use that.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 24 Dec 2021 18:42:00 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add new function to convert 32-bit XID to 64-bit" } ]
[ { "msg_contents": "Hi,\n\nIt is useful (for debugging purposes) if the checkpoint end message\nhas the checkpoint LSN and REDO LSN [1]. It gives more context while\nanalyzing checkpoint-related issues. The pg_controldata gives the last\ncheckpoint LSN and REDO LSN, but having this info alongside the log\nmessage helps analyze issues that happened previously, connect the\ndots and identify the root cause.\n\nAttaching a small patch herewith. Thoughts?\n\n[1]\n2021-12-23 14:58:54.694 UTC [1965649] LOG: checkpoint starting:\nshutdown immediate\n2021-12-23 14:58:54.714 UTC [1965649] LOG: checkpoint complete: wrote\n0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled;\nwrite=0.001 s, sync=0.001 s, total=0.025 s; sync files=0,\nlongest=0.000 s, average=0.000 s; distance=0 kB, estimate=0 kB;\nLSN=0/14D0AD0, REDO LSN=0/14D0AD0\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 23 Dec 2021 20:35:54 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "At Thu, 23 Dec 2021 20:35:54 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> Hi,\n> \n> It is useful (for debugging purposes) if the checkpoint end message\n> has the checkpoint LSN and REDO LSN [1]. It gives more context while\n> analyzing checkpoint-related issues. The pg_controldata gives the last\n> checkpoint LSN and REDO LSN, but having this info alongside the log\n> message helps analyze issues that happened previously, connect the\n> dots and identify the root cause.\n> \n> Attaching a small patch herewith. Thoughts?\n\nA big +1 from me.
I thought about proposing the same in the past.\n\n> [1]\n> 2021-12-23 14:58:54.694 UTC [1965649] LOG: checkpoint starting:\n> shutdown immediate\n> 2021-12-23 14:58:54.714 UTC [1965649] LOG: checkpoint complete: wrote\n> 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled;\n> write=0.001 s, sync=0.001 s, total=0.025 s; sync files=0,\n> longest=0.000 s, average=0.000 s; distance=0 kB, estimate=0 kB;\n> LSN=0/14D0AD0, REDO LSN=0/14D0AD0\n\nI thougt about something like the following, but your proposal may be\nclearer.\n\n> WAL range=[0/14D0340, 0/14D0AD0]\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 24 Dec 2021 14:51:34 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "On Fri, Dec 24, 2021 at 11:21 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 23 Dec 2021 20:35:54 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > Hi,\n> >\n> > It is useful (for debugging purposes) if the checkpoint end message\n> > has the checkpoint LSN and REDO LSN [1]. It gives more context while\n> > analyzing checkpoint-related issues. The pg_controldata gives the last\n> > checkpoint LSN and REDO LSN, but having this info alongside the log\n> > message helps analyze issues that happened previously, connect the\n> > dots and identify the root cause.\n> >\n> > Attaching a small patch herewith. Thoughts?\n>\n> A big +1 from me. I thought about proposing the same in the past.\n\nThanks for the review.
I've added a CF entry to not lose track -\nhttps://commitfest.postgresql.org/36/3474/.\n\n> > [1]\n> > 2021-12-23 14:58:54.694 UTC [1965649] LOG: checkpoint starting:\n> > shutdown immediate\n> > 2021-12-23 14:58:54.714 UTC [1965649] LOG: checkpoint complete: wrote\n> > 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled;\n> > write=0.001 s, sync=0.001 s, total=0.025 s; sync files=0,\n> > longest=0.000 s, average=0.000 s; distance=0 kB, estimate=0 kB;\n> > LSN=0/14D0AD0, REDO LSN=0/14D0AD0\n>\n> I thougt about something like the following, but your proposal may be\n> clearer.\n>\n> > WAL range=[0/14D0340, 0/14D0AD0]\n\nYeah the proposed in the v1 is clear saying checkpoint/restartpoint\nLSN and REDO LSN.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 24 Dec 2021 17:37:51 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "On Fri, Dec 24, 2021 at 02:51:34PM +0900, Kyotaro Horiguchi wrote:\n> I thougt about something like the following, but your proposal may be\n> clearer.\n\n+ \"LSN=%X/%X, REDO LSN=%X/%X\",\nThis could be rather confusing for the average user, even if I agree\nthat this is some information that only an advanced user could\nunderstand. Could it be possible to define those fields in a more\ndeterministic way? For one, it is hard to understand the relationship\nbetween both fields without looking at the code, particulary if both\nshare the same value. It is at least rather.. Well, mostly, easy to\nguess what each other field means in this context, which is not the\ncase of what you are proposing here. One idea could be use also\n\"start point\" for REDO, for example.
\n--\nMichael", "msg_date": "Fri, 24 Dec 2021 21:11:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "On Fri, Dec 24, 2021 at 5:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Dec 24, 2021 at 02:51:34PM +0900, Kyotaro Horiguchi wrote:\n> > I thougt about something like the following, but your proposal may be\n> > clearer.\n>\n> + \"LSN=%X/%X, REDO LSN=%X/%X\",\n> This could be rather confusing for the average user, even if I agree\n> that this is some information that only an advanced user could\n> understand. Could it be possible to define those fields in a more\n> deterministic way? For one, it is hard to understand the relationship\n> between both fields without looking at the code, particulary if both\n> share the same value. It is at least rather.. Well, mostly, easy to\n> guess what each other field means in this context, which is not the\n> case of what you are proposing here. One idea could be use also\n> \"start point\" for REDO, for example.\n\nHow about \"location=%X/%X, REDO start location=%X/%X\"?
The entire log\nmessage can look like below:\n\n2021-12-24 12:20:19.140 UTC [1977834] LOG: checkpoint\ncomplete:location=%X/%X, REDO start location=%X/%X; wrote 7 buffers\n(0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.005 s,\nsync=0.007 s, total=0.192 s; sync files=5, longest=0.005 s,\naverage=0.002 s; distance=293 kB, estimate=56584 kB\n\nAnother variant:\n2021-12-24 12:20:19.140 UTC [1977834] LOG: checkpoint completed at\nlocation=%X/%X with REDO start location=%X/%X: wrote 7 buffers (0.0%);\n0 WAL file(s) added, 0 removed, 0 recycled; write=0.005 s, sync=0.007\ns, total=0.192 s; sync files=5, longest=0.005 s, average=0.002 s;\ndistance=293 kB, estimate=56584 kB\n2021-12-24 12:20:19.140 UTC [1977834] LOG: restartpoint completed at\nlocation=%X/%X with REDO start location=%X/%X: wrote 7 buffers (0.0%);\n0 WAL file(s) added, 0 removed, 0 recycled; write=0.005 s, sync=0.007\ns, total=0.192 s; sync files=5, longest=0.005 s, average=0.002 s;\ndistance=293 kB, estimate=56584 kB\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 24 Dec 2021 17:54:02 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "On Fri, Dec 24, 2021 at 5:54 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Dec 24, 2021 at 5:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Fri, Dec 24, 2021 at 02:51:34PM +0900, Kyotaro Horiguchi wrote:\n> > > I thougt about something like the following, but your proposal may be\n> > > clearer.\n> >\n> > + \"LSN=%X/%X, REDO LSN=%X/%X\",\n> > This could be rather confusing for the average user, even if I agree\n> > that this is some information that only an advanced user could\n> > understand. Could it be possible to define those fields in a more\n> > deterministic way?
For one, it is hard to understand the relationship\n> > between both fields without looking at the code, particulary if both\n> > share the same value. It is at least rather.. Well, mostly, easy to\n> > guess what each other field means in this context, which is not the\n> > case of what you are proposing here. One idea could be use also\n> > \"start point\" for REDO, for example.\n>\n> How about \"location=%X/%X, REDO start location=%X/%X\"? The entire log\n> message can look like below:\n>\n> 2021-12-24 12:20:19.140 UTC [1977834] LOG: checkpoint\n> complete:location=%X/%X, REDO start location=%X/%X; wrote 7 buffers\n> (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.005 s,\n> sync=0.007 s, total=0.192 s; sync files=5, longest=0.005 s,\n> average=0.002 s; distance=293 kB, estimate=56584 kB\n>\n> Another variant:\n> 2021-12-24 12:20:19.140 UTC [1977834] LOG: checkpoint completed at\n> location=%X/%X with REDO start location=%X/%X: wrote 7 buffers (0.0%);\n> 0 WAL file(s) added, 0 removed, 0 recycled; write=0.005 s, sync=0.007\n> s, total=0.192 s; sync files=5, longest=0.005 s, average=0.002 s;\n> distance=293 kB, estimate=56584 kB\n> 2021-12-24 12:20:19.140 UTC [1977834] LOG: restartpoint completed at\n> location=%X/%X with REDO start location=%X/%X: wrote 7 buffers (0.0%);\n> 0 WAL file(s) added, 0 removed, 0 recycled; write=0.005 s, sync=0.007\n> s, total=0.192 s; sync files=5, longest=0.005 s, average=0.002 s;\n> distance=293 kB, estimate=56584 kB\n\nHere are the 2 patches.\n\none(v2-1-XXX.patch) adding the info as:\n2021-12-28 02:44:34.870 UTC [2384386] LOG: checkpoint complete:\nlocation=0/1B03040, REDO start location=0/1B03008; wrote 466 buffers\n(2.8%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.014 s,\nsync=0.038 s, total=0.072 s; sync files=21, longest=0.022 s,\naverage=0.002 s; distance=6346 kB, estimate=6346 kB\n\nanother(v2-2-XXX.patch) adding the info as:\n2021-12-28 02:52:24.464 UTC [2394396] LOG: checkpoint completed 
at\nlocation=0/212FFC8 with REDO start location=0/212FF90: wrote 451\nbuffers (2.8%); 0 WAL file(s) added, 0 removed, 1 recycled;\nwrite=0.012 s, sync=0.032 s, total=0.071 s; sync files=6,\nlongest=0.022 s, average=0.006 s; distance=6272 kB, estimate=6272 kB\n\nattaching v1-0001-XXX from the initial mail again just for the sake of\ncompletion:\n2021-12-23 14:58:54.714 UTC [1965649] LOG: checkpoint complete: wrote\n0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled;\nwrite=0.001 s, sync=0.001 s, total=0.025 s; sync files=0,\nlongest=0.000 s, average=0.000 s; distance=0 kB, estimate=0 kB;\nLSN=0/14D0AD0, REDO LSN=0/14D0AD0\n\nThoughts?\n\nRegards,\nBharath Rupireddy.", "msg_date": "Tue, 28 Dec 2021 08:26:28 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "At Tue, 28 Dec 2021 08:26:28 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Fri, Dec 24, 2021 at 5:54 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Fri, Dec 24, 2021 at 5:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Fri, Dec 24, 2021 at 02:51:34PM +0900, Kyotaro Horiguchi wrote:\n> > > > I thougt about something like the following, but your proposal may be\n> > > > clearer.\n> > >\n> > > + \"LSN=%X/%X, REDO LSN=%X/%X\",\n> > > This could be rather confusing for the average user, even if I agree\n> > > that this is some information that only an advanced user could\n> > > understand. Could it be possible to define those fields in a more\n> > > deterministic way? For one, it is hard to understand the relationship\n> > > between both fields without looking at the code, particulary if both\n> > > share the same value. It is at least rather..
Well, mostly, easy to\n> > > guess what each other field means in this context, which is not the\n> > > case of what you are proposing here. One idea could be use also\n> > > \"start point\" for REDO, for example.\n> >\n> > How about \"location=%X/%X, REDO start location=%X/%X\"? The entire log\n> > message can look like below:\n..\n> Thoughts?\n\nIt seems to me \"LSN\" or just \"location\" is more confusing or\nmysterious than \"REDO LSN\" for the average user. If we want to avoid\nbeing technically too detailed, we would use just \"start LSN=%X/%X,\nend LSN=%X/%X\". And it is equivalent to \"WAL range=[%X/%X, %X/%X]\"..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 04 Jan 2022 10:49:38 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "Hi,\n\nOn Tue, Dec 28, 2021 at 10:56 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> attaching v1-0001-XXX from the initial mail again just for the sake of\n> completion:\n\nUnfortunately this breaks the cfbot as it tries to apply this patch\ntoo: http://cfbot.cputube.org/patch_36_3474.log.\n\nFor this kind of situation I think that the usual solution is to use a\n.txt extension to make sure that the cfbot won't try to apply it.\n\n\n", "msg_date": "Wed, 12 Jan 2022 14:09:21 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> For this kind of situation I think that the usual solution is to use a\n> .txt extension to make sure that the cfbot won't try to apply it.\n\nYeah ... this has come up before. Is there a documented way to\nattach files that the cfbot will ignore?\n\nTwo specific scenarios seem to be interesting:\n\n1.
You are attaching patch(es) plus some non-patch files\n\n2. You are attaching some random files, and would like to not\ndisplace the cfbot's idea of the latest patchset.\n\nI don't know exactly how to do either of those.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Jan 2022 01:19:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "cfbot wrangling (was Re: Add checkpoint and redo LSN to\n LogCheckpointEnd log message)" }, { "msg_contents": "On Wed, Jan 12, 2022 at 01:19:22AM -0500, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > For this kind of situation I think that the usual solution is to use a\n> > .txt extension to make sure that the cfbot won't try to apply it.\n> \n> Yeah ... this has come up before. Is there a documented way to\n> attach files that the cfbot will ignore?\n\nNot that I know of unfortunately. I think the apply part is done by\nhttps://github.com/macdice/cfbot/blob/master/cfbot_patchburner_chroot_ctl.sh#L93-L120,\nwhich seems reasonable. So basically any extension outside of those could be\nused, excluding of course anything clearly suspicious that would be rejected by\nmany email providers.\n\n> Two specific scenarios seem to be interesting:\n> \n> 1. You are attaching patch(es) plus some non-patch files\n> \n> 2. You are attaching some random files, and would like to not\n> displace the cfbot's idea of the latest patchset.\n\nI'm assuming that someone wanting to send an additional patch to be applied on\ntop of the OP patchset is part of 2?\n\n\n", "msg_date": "Wed, 12 Jan 2022 14:31:53 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: cfbot wrangling (was Re: Add checkpoint and redo LSN to\n LogCheckpointEnd log message)" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Wed, Jan 12, 2022 at 01:19:22AM -0500, Tom Lane wrote:\n>> 2.
You are attaching some random files, and would like to not\n>> displace the cfbot's idea of the latest patchset.\n\n> I'm assuming that someone wanting to send an additional patch to be applied on\n> top of the OP patchset is part of 2?\n\nAFAIK, if you're submitting a patch then you have to attach a complete\npatchset, or the cfbot will be totally lost. Getting the bot to\nunderstand incremental patches would be useful for sure ... but it's\noutside the scope of what I'm asking for now, which is just clear\ndocumentation of what the bot can do already.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Jan 2022 01:37:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: cfbot wrangling (was Re: Add checkpoint and redo LSN to\n LogCheckpointEnd log message)" }, { "msg_contents": "On Wed, Jan 12, 2022 at 2:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> AFAIK, if you're submitting a patch then you have to attach a complete\n> patchset, or the cfbot will be totally lost. Getting the bot to\n> understand incremental patches would be useful for sure ... but it's\n> outside the scope of what I'm asking for now, which is just clear\n> documentation of what the bot can do already.\n\nRight, but the use case I'm mentioning is a bit different: provide\nanother patch while *not* triggering the cfbot.
I've seen cases in\nthe past where people want to share some code to the OP and it seems\nreasonable to allow that without risking the trigger the cfbot, at\nleast not without the OP validating or adapting the changes.\n\n\n", "msg_date": "Wed, 12 Jan 2022 14:50:36 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: cfbot wrangling (was Re: Add checkpoint and redo LSN to\n LogCheckpointEnd log message)" }, { "msg_contents": "On Wed, Jan 12, 2022 at 7:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Wed, Jan 12, 2022 at 01:19:22AM -0500, Tom Lane wrote:\n> >> 2. You are attaching some random files, and would like to not\n> >> displace the cfbot's idea of the latest patchset.\n>\n> > I'm assuming that someone wanting to send an additional patch to be applied on\n> > top of the OP patchset is part of 2?\n>\n> AFAIK, if you're submitting a patch then you have to attach a complete\n> patchset, or the cfbot will be totally lost. Getting the bot to\n> understand incremental patches would be useful for sure ...
but it's\n> outside the scope of what I'm asking for now, which is just clear\n> documentation of what the bot can do already.\n\nBy way of documentation, I've just now tried to answer these question\nin the new FAQ at:\n\nhttps://wiki.postgresql.org/wiki/Cfbot\n\n\n", "msg_date": "Wed, 12 Jan 2022 19:52:00 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: cfbot wrangling (was Re: Add checkpoint and redo LSN to\n LogCheckpointEnd log message)" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> By way of documentation, I've just now tried to answer these question\n> in the new FAQ at:\n> https://wiki.postgresql.org/wiki/Cfbot\n\nThanks!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Jan 2022 01:58:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: cfbot wrangling (was Re: Add checkpoint and redo LSN to\n LogCheckpointEnd log message)" }, { "msg_contents": "On Wed, Jan 12, 2022 at 2:52 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> By way of documentation, I've just now tried to answer these question\n> in the new FAQ at:\n>\n> https://wiki.postgresql.org/wiki/Cfbot\n\nGreat!
Thanks a lot!\n\n\n", "msg_date": "Wed, 12 Jan 2022 15:01:40 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: cfbot wrangling (was Re: Add checkpoint and redo LSN to\n LogCheckpointEnd log message)" }, { "msg_contents": "On Wed, Jan 12, 2022 at 11:39 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Tue, Dec 28, 2021 at 10:56 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > attaching v1-0001-XXX from the initial mail again just for the sake of\n> > completion:\n>\n> Unfortunately this breaks the cfbot as it tries to apply this patch\n> too: http://cfbot.cputube.org/patch_36_3474.log.\n>\n> For this kind of situation I think that the usual solution is to use a\n> .txt extension to make sure that the cfbot won't try to apply it.\n\nThanks. IMO, the following format of logging is better, so attaching\nthe v2-0001-Add-checkpoint-and-redo-LSN-to-LogCheckpointEnd-l.patch as\n.patch\n\n2021-12-28 02:52:24.464 UTC [2394396] LOG: checkpoint completed at\nlocation=0/212FFC8 with REDO start location=0/212FF90: wrote 451\nbuffers (2.8%); 0 WAL file(s) added, 0 removed, 1 recycled;\nwrite=0.012 s, sync=0.032 s, total=0.071 s; sync files=6,\nlongest=0.022 s, average=0.006 s; distance=6272 kB, estimate=6272 kB\n\nOthers are attached as .txt files.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 13 Jan 2022 11:50:53 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "On Thu, Jan 13, 2022 at 11:50 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Thanks.
IMO, the following format of logging is better, so attaching\n> the v2-0001-Add-checkpoint-and-redo-LSN-to-LogCheckpointEnd-l.patch as\n> .patch\n>\n> 2021-12-28 02:52:24.464 UTC [2394396] LOG: checkpoint completed at\n> location=0/212FFC8 with REDO start location=0/212FF90: wrote 451\n> buffers (2.8%); 0 WAL file(s) added, 0 removed, 1 recycled;\n> write=0.012 s, sync=0.032 s, total=0.071 s; sync files=6,\n> longest=0.022 s, average=0.006 s; distance=6272 kB, estimate=6272 kB\n\nOne of the test cases was failing with the above style of the log\nmessage, changing \"checkpoint complete\" to \"checkpoint completed at\nlocation\" doesn't seem to be a better idea. It looks like the users or\nthe log monitoring tools might be using the same text \"checkpoint\ncomplete\", therefore I don't want to break that. Here's the v3 patch\nthat I think will work better. Please review.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 14 Jan 2022 18:52:45 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "On 1/3/22, 5:52 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n> It seems to me \"LSN\" or just \"location\" is more confusing or\r\n> mysterious than \"REDO LSN\" for the average user. If we want to avoid\r\n> being technically too detailed, we would use just \"start LSN=%X/%X,\r\n> end LSN=%X/%X\". And it is equivalent to \"WAL range=[%X/%X, %X/%X]\"..\r\n\r\nMy first instinct was that this should stay aligned with\r\npg_controldata, but that would mean using \"location=%X/%X, REDO\r\nlocation=%X/%X,\" which doesn't seem terribly descriptive. IIUC the\r\n\"checkpoint location\" is the LSN of the WAL record for the checkpoint,\r\nand the \"checkpoint's REDO location\" is the LSN where checkpoint\r\ncreation began (i.e., what you must retain for crash recovery).
My\r\nvote is for \"start=%X/%X, end=%X/%X.\"\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 20 Jan 2022 00:36:32 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "At Thu, 20 Jan 2022 00:36:32 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 1/3/22, 5:52 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\n> > It seems to me \"LSN\" or just \"location\" is more confusing or\n> > mysterious than \"REDO LSN\" for the average user. If we want to avoid\n> > being technically too detailed, we would use just \"start LSN=%X/%X,\n> > end LSN=%X/%X\". And it is equivalent to \"WAL range=[%X/%X, %X/%X]\"..\n> \n> My first instinct was that this should stay aligned with\n> pg_controldata, but that would mean using \"location=%X/%X, REDO\n> location=%X/%X,\" which doesn't seem terribly descriptive. IIUC the\n> \"checkpoint location\" is the LSN of the WAL record for the checkpoint,\n> and the \"checkpoint's REDO location\" is the LSN where checkpoint\n> creation began (i.e., what you must retain for crash recovery). My\n> vote is for \"start=%X/%X, end=%X/%X.\"\n\n+1. Works for me. %X/%X itself expresses it is an LSN.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 20 Jan 2022 12:00:29 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "On Thu, Jan 20, 2022 at 6:06 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 1/3/22, 5:52 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\n> > It seems to me \"LSN\" or just \"location\" is more confusing or\n> > mysterious than \"REDO LSN\" for the average user. If we want to avoid\n> > being technically too detailed, we would use just \"start LSN=%X/%X,\n> > end LSN=%X/%X\". 
And it is equivalent to \"WAL range=[%X/%X, %X/%X]\"..\n>\n> My first instinct was that this should stay aligned with\n> pg_controldata, but that would mean using \"location=%X/%X, REDO\n> location=%X/%X,\" which doesn't seem terribly descriptive.  IIUC the\n> \"checkpoint location\" is the LSN of the WAL record for the checkpoint,\n> and the \"checkpoint's REDO location\" is the LSN where checkpoint\n> creation began (i.e., what you must retain for crash recovery).  My\n> vote is for \"start=%X/%X, end=%X/%X.\"\n\nI'm still not clear how the REDO location can be treated as a start\nLSN? Can someone throw some light on what this checkpoint's REDO\nlocation is?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 27 Jan 2022 20:37:37 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "On Thu, Jan 27, 2022 at 08:37:37PM +0530, Bharath Rupireddy wrote:\n> I'm still not clear how the REDO location can be treated as a start\n> LSN? 
Can someone throw some light on what this checkpoint's REDO\n> location is?\n\nIt's the WAL insert location at the time the checkpoint began (i.e., where\nyou need to begin replaying WAL from after a crash).\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com/\n\n\n", "msg_date": "Thu, 27 Jan 2022 15:57:49 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "On Thu, Jan 20, 2022 at 8:30 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 20 Jan 2022 00:36:32 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in\n> > On 1/3/22, 5:52 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\n> > > It seems to me \"LSN\" or just \"location\" is more confusing or\n> > > mysterious than \"REDO LSN\" for the average user.  If we want to avoid\n> > > being technically too detailed, we would use just \"start LSN=%X/%X,\n> > > end LSN=%X/%X\". And it is equivalent to \"WAL range=[%X/%X, %X/%X]\"..\n> >\n> > My first instinct was that this should stay aligned with\n> > pg_controldata, but that would mean using \"location=%X/%X, REDO\n> > location=%X/%X,\" which doesn't seem terribly descriptive.  IIUC the\n> > \"checkpoint location\" is the LSN of the WAL record for the checkpoint,\n> > and the \"checkpoint's REDO location\" is the LSN where checkpoint\n> > creation began (i.e., what you must retain for crash recovery).  My\n> > vote is for \"start=%X/%X, end=%X/%X.\"\n>\n> +1. Works for me. %X/%X itself expresses it is an LSN.\n\nThanks for the review. 
Here's the v4 patch, please have a look.\n\nLet's not attempt to change how pg_controldata (tool and core\nfunctions) emit the start and end LSN as checkpoint_lsn/redo_lsn and\ncheckpoint location/checkpoint's REDO location.\n\n[1]\n2022-01-28 03:06:10.213 UTC [2409486] LOG: checkpoint starting:\nimmediate force wait\n2022-01-28 03:06:10.257 UTC [2409486] LOG: checkpoint complete:\nstart=0/14D9510, end=0/14D9548; wrote 4 buffers (0.0%); 0 WAL file(s)\nadded, 0 removed, 0 recycled; write=0.007 s, sync=0.008 s, total=0.044\ns; sync files=3, longest=0.005 s, average=0.003 s; distance=0 kB,\nestimate=0 kB\n\n2022-01-28 03:06:42.254 UTC [2409486] LOG: checkpoint starting:\nimmediate force wait\n2022-01-28 03:06:42.279 UTC [2409486] LOG: checkpoint complete:\nstart=0/14DBDB8, end=0/14DBDF0; wrote 2 buffers (0.0%); 0 WAL file(s)\nadded, 0 removed, 0 recycled; write=0.004 s, sync=0.004 s, total=0.025\ns; sync files=2, longest=0.003 s, average=0.002 s; distance=10 kB,\nestimate=10 kB\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 28 Jan 2022 08:43:36 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "On Fri, Jan 28, 2022 at 08:43:36AM +0530, Bharath Rupireddy wrote:\n> 2022-01-28 03:06:10.213 UTC [2409486] LOG: checkpoint starting:\n> immediate force wait\n> 2022-01-28 03:06:10.257 UTC [2409486] LOG: checkpoint complete:\n> start=0/14D9510, end=0/14D9548; wrote 4 buffers (0.0%); 0 WAL file(s)\n> added, 0 removed, 0 recycled; write=0.007 s, sync=0.008 s, total=0.044\n> s; sync files=3, longest=0.005 s, average=0.003 s; distance=0 kB,\n> estimate=0 kB\n> \n> 2022-01-28 03:06:42.254 UTC [2409486] LOG: checkpoint starting:\n> immediate force wait\n> 2022-01-28 03:06:42.279 UTC [2409486] LOG: checkpoint complete:\n> start=0/14DBDB8, end=0/14DBDF0; wrote 2 buffers (0.0%); 0 WAL file(s)\n> added, 0 removed, 0 recycled; 
write=0.004 s, sync=0.004 s, total=0.025\n> s; sync files=2, longest=0.003 s, average=0.002 s; distance=10 kB,\n> estimate=10 kB\n\nI know I voted for \"start=%X/%X, end=%X/%X,\" but looking at this again, I\nwonder if it could be misleading.  \"start\" is the redo location, and \"end\"\nis the location of the checkpoint record, but I could understand why\nsomeone might think that \"start\" is the location of the previous checkpoint\nrecord and \"end\" is the redo location of the new one.  I think your\noriginal idea of \"lsn=%X/%X, redo lsn=%X/%X\" could be a good alternative.\n\nIn any case, this patch is small and otherwise looks reasonable to me, so I\nam going to mark it as ready-for-committer.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com/\n\n\n", "msg_date": "Thu, 27 Jan 2022 21:46:20 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "On Fri, Jan 28, 2022 at 11:16 AM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n> I know I voted for \"start=%X/%X, end=%X/%X,\" but looking at this again, I\n> wonder if it could be misleading.  \"start\" is the redo location, and \"end\"\n> is the location of the checkpoint record, but I could understand why\n> someone might think that \"start\" is the location of the previous checkpoint\n> record and \"end\" is the redo location of the new one.  I think your\n> original idea of \"lsn=%X/%X, redo lsn=%X/%X\" could be a good alternative.\n>\n> In any case, this patch is small and otherwise looks reasonable to me, so I\n> am going to mark it as ready-for-committer.\n\nThanks for your review. 
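(An aside for anyone eyeballing these values: the %X/%X notation is simply the high and low 32-bit halves of the 64-bit LSN, which is what LSN_FORMAT_ARGS expands to on the C side. Below is a minimal Python sketch of that convention, using the sample values from the log excerpt upthread; it is illustration only, the helper names are mine, and nothing in it is part of the patches:

```python
# The two hex fields of an LSN like 0/14D9548 are the high and low
# 32-bit halves of one 64-bit WAL position, mirroring LSN_FORMAT_ARGS.
def parse_lsn(text):
    # Turn a string like '0/14D9548' into a single 64-bit integer.
    hi, lo = text.split('/')
    return (int(hi, 16) << 32) | int(lo, 16)

def format_lsn(lsn):
    # Inverse of parse_lsn, matching the %X/%X convention.
    return '%X/%X' % (lsn >> 32, lsn & 0xFFFFFFFF)

# With the v4 sample values, the redo (start) LSN orders strictly
# before the checkpoint record's (end) LSN:
start = parse_lsn('0/14D9510')
end = parse_lsn('0/14D9548')
assert start < end
assert end - start == 0x38      # 56 bytes of WAL between the two
assert format_lsn(end) == '0/14D9548'
```

Parsing the fields back into integers like this is what makes the two values directly comparable, e.g. for checking how far the redo pointer trails the checkpoint record.)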
In summary, we have these options to choose\ncheckpoint LSN and last REDO LSN:\n\n1) \"start=%X/%X, end=%X/%X\" (ControlFile->checkPointCopy.redo,\nControlFile->checkPoint)\n2) \"lsn=%X/%X, redo lsn=%X/%X\"\n3) \"location=%X/%X, REDO location=%X/%X\" -- similar to what\npg_controldata and pg_control_checkpoint shows currently.\n4) \"location=%X/%X, REDO start location=%X/%X\"\n\nI will leave that decision to the committer.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 28 Jan 2022 12:09:28 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "Greetings,\n\n* Bharath Rupireddy (bharath.rupireddyforpostgres@gmail.com) wrote:\n> On Fri, Jan 28, 2022 at 11:16 AM Nathan Bossart\n> <nathandbossart@gmail.com> wrote:\n> > I know I voted for \"start=%X/%X, end=%X/%X,\" but looking at this again, I\n> > wonder if it could be misleading. \"start\" is the redo location, and \"end\"\n> > is the location of the checkpoint record, but I could understand why\n> > someone might think that \"start\" is the location of the previous checkpoint\n> > record and \"end\" is the redo location of the new one. I think your\n> > original idea of \"lsn=%X/%X, redo lsn=%X/%X\" could be a good alternative.\n> >\n> > Іn any case, this patch is small and otherwise looks reasonable to me, so I\n> > am going to mark it as ready-for-committer.\n> \n> Thanks for your review. In summary, we have these options to choose\n> checkpoint LSN and last REDO LSN:\n> \n> 1) \"start=%X/%X, end=%X/%X\" (ControlFile->checkPointCopy.redo,\n> ControlFile->checkPoint)\n> 2) \"lsn=%X/%X, redo lsn=%X/%X\"\n> 3) \"location=%X/%X, REDO location=%X/%X\" -- similar to what\n> pg_controldata and pg_control_checkpoint shows currently.\n> 4) \"location=%X/%X, REDO start location=%X/%X\"\n> \n> I will leave that decision to the committer.\n\nI'd also vote for #2. 
Regarding 3 and 4, I'd argue that those should\nhave been changed when we changed a number of other things from the\ngeneric 'location' to be 'lsn' in system views and functions, and\ntherefore we should go change those to also specify 'lsn' rather than\njust saying 'location'.\n\nThanks,\n\nStephen", "msg_date": "Mon, 31 Jan 2022 10:11:00 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "On Mon, Jan 31, 2022 at 8:41 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> > Thanks for your review. In summary, we have these options to choose\n> > checkpoint LSN and last REDO LSN:\n> >\n> > 1) \"start=%X/%X, end=%X/%X\" (ControlFile->checkPointCopy.redo,\n> > ControlFile->checkPoint)\n> > 2) \"lsn=%X/%X, redo lsn=%X/%X\"\n> > 3) \"location=%X/%X, REDO location=%X/%X\" -- similar to what\n> > pg_controldata and pg_control_checkpoint shows currently.\n> > 4) \"location=%X/%X, REDO start location=%X/%X\"\n> >\n> > I will leave that decision to the committer.\n>\n> I'd also vote for #2. Regarding 3 and 4, I'd argue that those should\n> have been changed when we changed a number of other things from the\n> generic 'location' to be 'lsn' in system views and functions, and\n> therefore we should go change those to also specify 'lsn' rather than\n> just saying 'location'.\n\nThanks. Here are 2 patches, 0001 for adding checkpoint lsn and redo\nlsn in the checkpoint completed message and 0002 for changing the\n\"location\" to LSN in pg_controdata's output. 
With the 0002,\npg_control_checkpont, pg_controldata and checkpoint completed message\nwill all be in sync with the checkpoint lsn and redo lsn.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Mon, 31 Jan 2022 23:45:19 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "On Mon, Jan 31, 2022 at 11:45:19PM +0530, Bharath Rupireddy wrote:\n> Thanks. Here are 2 patches, 0001 for adding checkpoint lsn and redo\n> lsn in the checkpoint completed message and 0002 for changing the\n> \"location\" to LSN in pg_controdata's output. With the 0002,\n> pg_control_checkpont, pg_controldata and checkpoint completed message\n> will all be in sync with the checkpoint lsn and redo lsn.\n\nI think the pg_controldata change needs some extra spaces for alignment,\nbut otherwise these patches seem reasonable to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 31 Jan 2022 10:30:09 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "On Tue, Feb 1, 2022 at 12:00 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Mon, Jan 31, 2022 at 11:45:19PM +0530, Bharath Rupireddy wrote:\n> > Thanks. Here are 2 patches, 0001 for adding checkpoint lsn and redo\n> > lsn in the checkpoint completed message and 0002 for changing the\n> > \"location\" to LSN in pg_controdata's output. With the 0002,\n> > pg_control_checkpont, pg_controldata and checkpoint completed message\n> > will all be in sync with the checkpoint lsn and redo lsn.\n>\n> I think the pg_controldata change needs some extra spaces for alignment,\n> but otherwise these patches seem reasonable to me.\n\nThanks. My bad it was. 
Changed in v6.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Tue, 1 Feb 2022 00:23:10 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "On Tue, Feb 01, 2022 at 12:23:10AM +0530, Bharath Rupireddy wrote:\n> On Tue, Feb 1, 2022 at 12:00 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> I think the pg_controldata change needs some extra spaces for alignment,\n>> but otherwise these patches seem reasonable to me.\n> \n> Thanks. My bad it was. Changed in v6.\n\nLGTM\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 31 Jan 2022 17:44:35 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "\n\nOn 2022/02/01 10:44, Nathan Bossart wrote:\n> On Tue, Feb 01, 2022 at 12:23:10AM +0530, Bharath Rupireddy wrote:\n>> On Tue, Feb 1, 2022 at 12:00 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>>> I think the pg_controldata change needs some extra spaces for alignment,\n>>> but otherwise these patches seem reasonable to me.\n>>\n>> Thanks. My bad it was. Changed in v6.\n\n-\t\t\t\t(errmsg(\"restartpoint complete: wrote %d buffers (%.1f%%); \"\n+\t\t\t\t(errmsg(\"restartpoint complete: lsn=%X/%X, redo lsn=%X/%X; \"\n+\t\t\t\t\t\t\"wrote %d buffers (%.1f%%); \"\n \t\t\t\t\t\t\"%d WAL file(s) added, %d removed, %d recycled; \"\n \t\t\t\t\t\t\"write=%ld.%03d s, sync=%ld.%03d s, total=%ld.%03d s; \"\n \t\t\t\t\t\t\"sync files=%d, longest=%ld.%03d s, average=%ld.%03d s; \"\n \t\t\t\t\t\t\"distance=%d kB, estimate=%d kB\",\n+\t\t\t\t\t\tLSN_FORMAT_ARGS(ControlFile->checkPointCopy.redo),\n+\t\t\t\t\t\tLSN_FORMAT_ARGS(ControlFile->checkPoint),\n\nThe order of arguments for LSN seems wrong. 
LSN_FORMAT_ARGS(ControlFile->checkPoint) should be specified ahead of LSN_FORMAT_ARGS(ControlFile->checkPointCopy.redo)?\n\nCould you tell me why the information for LSN is reported early in the log message? Since ordinarily users would be more interested in the information about I/O by checkpoint, the information for LSN should be placed later? Sorry if this was already discussed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 1 Feb 2022 12:40:41 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "On Tue, Feb 1, 2022 at 9:10 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> The order of arguments for LSN seems wrong. LSN_FORMAT_ARGS(ControlFile->checkPoint) should be specified ahead of LSN_FORMAT_ARGS(ControlFile->checkPointCopy.redo)?\n\nThanks. Corrected.\n\n> Could you tell me why the information for LSN is reported early in the log message? Since ordinarily users would be more interested in the information about I/O by checkpoint, the information for LSN should be placed later? Sorry if this was already discussed.\n\nIt is useful (for debugging purposes) if the checkpoint end message\nhas the checkpoint LSN(end) and REDO LSN(start). It gives more context\nwhile analyzing checkpoint-related issues. 
The pg_controldata gives\nthe last checkpoint LSN and REDO LSN, but having this info alongside\nthe log message helps analyze issues that happened previously, connect\nthe dots and identify the root cause.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Tue, 1 Feb 2022 09:31:09 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "\n\nOn 2022/02/01 13:01, Bharath Rupireddy wrote:\n> On Tue, Feb 1, 2022 at 9:10 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> The order of arguments for LSN seems wrong. LSN_FORMAT_ARGS(ControlFile->checkPoint) should be specified ahead of LSN_FORMAT_ARGS(ControlFile->checkPointCopy.redo)?\n> \n> Thanks. Corrected.\n\nThanks!\n\n>> Could you tell me why the information for LSN is reported early in the log message? Since ordinarily users would be more interested in the information about I/O by checkpoint, the information for LSN should be placed later? Sorry if this was already discussed.\n> \n> It is useful (for debugging purposes) if the checkpoint end message\n> has the checkpoint LSN(end) and REDO LSN(start). It gives more context\n> while analyzing checkpoint-related issues. The pg_controldata gives\n> the last checkpoint LSN and REDO LSN, but having this info alongside\n> the log message helps analyze issues that happened previously, connect\n> the dots and identify the root cause.\n\nMy previous comment was confusing... Probably I understand why you tried to put this information in checkpoint log message. But I was suggesting to put that information at the end of log message instead of the beginning of it. 
Because ordinary users would be less interested in this LSN information than other ones like the number of buffers written.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 1 Feb 2022 13:19:43 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "On Tue, Feb 1, 2022 at 9:49 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> My previous comment was confusing... Probably I understand why you tried to put this information in checkpoint log message. But I was suggesting to put that information at the end of log message instead of the beginning of it. Because ordinary users would be less interested in this LSN information than other ones like the number of buffers written.\n\nActually, there's no strong reason to put LSN info at the beginning of\nthe message except that LSN/REDO LSN next to the\ncheckpoint/restartpoint complete would make the users understand the\nLSN and REDO LSN belong to the checkpoint/restartpoint. 
Since this\nwasn't a strong reason, I agree to keep it at the end.\n\nModified in v8.\n\n[1]\n2022-02-01 04:34:17.657 UTC [3597073] LOG: checkpoint complete: wrote\n21 buffers (0.1%); 0 WAL file(s) added, 0 removed, 0 recycled;\nwrite=0.004 s, sync=0.008 s, total=0.031 s; sync files=18,\nlongest=0.006 s, average=0.001 s; distance=77 kB, estimate=77 kB;\nlsn=0/14D5AF0, redo lsn=0/14D5AB8\n\nRegards,\nBharath Rupireddy.", "msg_date": "Tue, 1 Feb 2022 10:08:04 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "At Tue, 1 Feb 2022 10:08:04 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Tue, Feb 1, 2022 at 9:49 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >\n> > My previous comment was confusing... Probably I understand why you tried to put this information in checkpoint log message. But I was suggesting to put that information at the end of log message instead of the beginning of it. Because ordinary users would be less interested in this LSN information than other ones like the number of buffers written.\n> \n> Actually, there's no strong reason to put LSN info at the beginning of\n> the message except that LSN/REDO LSN next to the\n> checkpoint/restartpoint complete would make the users understand the\n> LSN and REDO LSN belong to the checkpoint/restartpoint. 
Since this\n> wasn't a strong reason, I agree to keep it at the end.\n> \n> Modified in v8.\n> \n> [1]\n> 2022-02-01 04:34:17.657 UTC [3597073] LOG: checkpoint complete: wrote\n> 21 buffers (0.1%); 0 WAL file(s) added, 0 removed, 0 recycled;\n> write=0.004 s, sync=0.008 s, total=0.031 s; sync files=18,\n> longest=0.006 s, average=0.001 s; distance=77 kB, estimate=77 kB;\n> lsn=0/14D5AF0, redo lsn=0/14D5AB8\n\n0001 looks good to me.\n\nI tend to agree to 0002.\n\n\nFWIW, I collected other user-facing usage of \"location\" as LSN.\n\nxlog.c:5965, 6128:\n (errmsg(\"recovery stopping before WAL location (LSN) \\\"%X/%X\\\"\",\n\nxlog.c:6718:\n (errmsg(\"control file contains invalid checkpoint location\")));\n\nxlog.c:6846:\n (errmsg(\"starting point-in-time recovery to WAL location (LSN) \\\"%X/%X\\\"\",\n\nxlog.c:6929:\n (errmsg(\"could not find redo location referenced by checkpoint record\"),\n\nxlog.c:11298, 11300: (in backup-label)\n appendStringInfo(labelfile, \"START WAL LOCATION: %X/%X (file %s)\\n\",\n appendStringInfo(labelfile, \"CHECKPOINT LOCATION: %X/%X\\n\",\n (and corresponding reader-code)\n\nxlog,c:11791, 11793: (in backup history file)\n fprintf(fp, \"START WAL LOCATION: %X/%X (file %s)\\n\",\n fprintf(fp, \"STOP WAL LOCATION: %X/%X (file %s)\\n\",\n (and corresponding reader-code)\n\nrepl_scanner.l:151:\n yyerror(\"invalid streaming start location\");\n\npg_proc.dat:\n many function descriptions use \"location\" as LSN.\n\npg_waldump.c:768,777,886,938,1029,1071,1083:\n printf(_(\" -e, --end=RECPTR stop reading at WAL location RECPTR\\n\"));\n printf(_(\" -s, --start=RECPTR start reading at WAL location RECPTR\\n\"));\n pg_log_error(\"could not parse end WAL location \\\"%s\\\"\",\n pg_log_error(\"could not parse start WAL location \\\"%s\\\"\",\n pg_log_error(\"start WAL location %X/%X is not inside file \\\"%s\\\"\",\n pg_log_error(\"end WAL location %X/%X is not inside file \\\"%s\\\"\",\n pg_log_error(\"no start WAL location 
given\");\n\npg_basebackup.c:476, 615: (confusing with file/directory path..)\n pg_log_error(\"could not parse write-ahead log location \\\"%s\\\"\",\n pg_log_error(\"could not parse write-ahead log location \\\"%s\\\"\",\n\npg_rewind.c:346:\n pg_log_info(\"servers diverged at WAL location %X/%X on timeline %u\",\npg_rewind/timeline.c:82:\n pg_log_error(\"Expected a write-ahead log switchpoint location.\");\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 01 Feb 2022 15:27:56 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "On Tue, Feb 1, 2022 at 11:58 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> > Modified in v8.\n>\n> 0001 looks good to me.\n>\n> I tend to agree to 0002.\n\nThanks.\n\n> FWIW, I collected other user-facing usage of \"location\" as LSN.\n>\n> xlog.c:11298, 11300: (in backup-label)\n> appendStringInfo(labelfile, \"START WAL LOCATION: %X/%X (file %s)\\n\",\n> appendStringInfo(labelfile, \"CHECKPOINT LOCATION: %X/%X\\n\",\n> (and corresponding reader-code)\n>\n> xlog,c:11791, 11793: (in backup history file)\n> fprintf(fp, \"START WAL LOCATION: %X/%X (file %s)\\n\",\n> fprintf(fp, \"STOP WAL LOCATION: %X/%X (file %s)\\n\",\n> (and corresponding reader-code)\n\nI tried to change the \"location\" to \"lsn\" in most of the user-facing\nmessages/text. I refrained from changing the bakup_label file content\n(above) as it might break many applications/service layer code and\nit's not good for backward compatibility.\n\nAttaching the above changes 0003 (0001 and 0002 remain the same). 
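(For log-monitoring tools that want to consume the new fields, the added tail of the message is easy to pick out. Here is an illustrative Python sketch against the v8 sample line quoted upthread; the regex and helper are my own, not something shipped with the patches:

```python
import re

# Extract the two LSN fields appended to the checkpoint-complete
# message (sample text abridged from the v8 example upthread).
SAMPLE = ('checkpoint complete: wrote 21 buffers (0.1%); '
          'distance=77 kB, estimate=77 kB; '
          'lsn=0/14D5AF0, redo lsn=0/14D5AB8')

PATTERN = re.compile(r'lsn=([0-9A-F]+/[0-9A-F]+), redo lsn=([0-9A-F]+/[0-9A-F]+)')

def extract_lsns(line):
    # Return (checkpoint_lsn, redo_lsn) as strings, or None if absent.
    m = PATTERN.search(line)
    return m.groups() if m else None

ckpt, redo = extract_lsns(SAMPLE)
assert ckpt == '0/14D5AF0'
assert redo == '0/14D5AB8'
```

Since the fields sit at the end of the line, older parsers that only look at the leading counters keep working unchanged.)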
If\nthe committer doesn't agree on the text or wording in 0003, I would\nlike the 0001 and 0002 to be taken here and I can start a new thread\nfor discussing 0003 separately.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Tue, 1 Feb 2022 18:33:24 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "\n\nOn 2022/02/01 22:03, Bharath Rupireddy wrote:\n> On Tue, Feb 1, 2022 at 11:58 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>>> Modified in v8.\n>>\n>> 0001 looks good to me.\n\nI found that CreateRestartPoint() already reported the redo lsn as follows after emitting the restartpoint log message. To avoid duplicated logging of the same information, we should update this code?\n\n\tereport((log_checkpoints ? LOG : DEBUG2),\n\t\t\t(errmsg(\"recovery restart point at %X/%X\",\n\t\t\t\t\tLSN_FORMAT_ARGS(lastCheckPoint.redo)),\n\t\t\t xtime ? errdetail(\"Last completed transaction was at log time %s.\",\n\t\t\t\t\t\t\t timestamptz_to_str(xtime)) : 0));\n\nThis code reports lastCheckPoint.redo as redo lsn. OTOH, with the patch, LogCheckpointEnd() reports ControlFile->checkPointCopy.redo. They may be different, for example, when the current DB state is not DB_IN_ARCHIVE_RECOVERY. In this case, which lsn should we report as redo lsn?\n\n+\t\t\t\t\t\t\"lsn=%X/%X, redo lsn=%X/%X\",\n\nOriginally you proposed to use upper cases for \"lsn\". But the latest patch uses the lower cases. Why? It seems better to use upper cases, i.e., LSN and REDO LSN because LSN is basically used in other errmsg().\n\n> Attaching the above changes 0003 (0001 and 0002 remain the same). 
If\n> the committer doesn't agree on the text or wording in 0003, I would\n> like the 0001 and 0002 to be taken here and I can start a new thread\n> for discussing 0003 separately.\n\nPersonally I'm ok with 001, but regarding 0002 and 0003 patches, I'm not sure if it's really worth replacing \"location\" with \"lsn\" there. BTW, the similar idea was proposed at [1] before, but seems \"location\" was left as it was.\n\n[1]\nhttps://postgr.es/m/20487.1494514594@sss.pgh.pa.us\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 2 Feb 2022 01:09:43 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "Greetings,\n\n* Fujii Masao (masao.fujii@oss.nttdata.com) wrote:\n> On 2022/02/01 22:03, Bharath Rupireddy wrote:\n> >On Tue, Feb 1, 2022 at 11:58 AM Kyotaro Horiguchi\n> ><horikyota.ntt@gmail.com> wrote:\n> >>>Modified in v8.\n> >>\n> >>0001 looks good to me.\n> \n> I found that CreateRestartPoint() already reported the redo lsn as follows after emitting the restartpoint log message. To avoid duplicated logging of the same information, we should update this code?\n> \n> \tereport((log_checkpoints ? LOG : DEBUG2),\n> \t\t\t(errmsg(\"recovery restart point at %X/%X\",\n> \t\t\t\t\tLSN_FORMAT_ARGS(lastCheckPoint.redo)),\n> \t\t\t xtime ? errdetail(\"Last completed transaction was at log time %s.\",\n> \t\t\t\t\t\t\t timestamptz_to_str(xtime)) : 0));\n> \n> This code reports lastCheckPoint.redo as redo lsn. OTOH, with the patch, LogCheckpointEnd() reports ControlFile->checkPointCopy.redo. They may be different, for example, when the current DB state is not DB_IN_ARCHIVE_RECOVERY. 
In this case, which lsn should we report as redo lsn?\n> \n> +\t\t\t\t\t\t\"lsn=%X/%X, redo lsn=%X/%X\",\n> \n> Originally you proposed to use upper cases for \"lsn\". But the latest patch uses the lower cases. Why? It seems better to use upper cases, i.e., LSN and REDO LSN because LSN is basically used in other errmsg().\n\nWe do use 'lsn=' quite a bit in verify_nbtree.c already and lowercase is\nalso what's in the various functions and views in the catalog in the\ndatabase, of course. I don't see even one usage of \"LSN=\" in the tree\ntoday. We also use 'lsn %X/%X' in replorigindesc.c, xactdesc.c,\nxactdesc.c, tablesync.c, standby.c, parsexlog.c, then 'redo %X/%X' in\nxactdesc.c.\n\nxlog.c does have a number of \"WAL location (LSN)\", along with a bunch of\nother usages. logical.c uses both \"LSN\" and \"lsn\". worker.c uses\n\"LSN\". slot.c uses \"restart_lsn\". pg_rewind.c uses \"WAL location\" while\npg_waldump.c uses, \"lsn:\", \"WAL location\", and \"WAL record\".\n\nOverall, we don't seem to be super consistent, but I'd say that 'lsn='\nlooks the best, to my eyes anyway, and isn't out of place in the code\nbase. Lowercase seems to generally be more common overall. \n\n> >Attaching the above changes 0003 (0001 and 0002 remain the same). If\n> >the committer doesn't agree on the text or wording in 0003, I would\n> >like the 0001 and 0002 to be taken here and I can start a new thread\n> >for discussing 0003 separately.\n> \n> Personally I'm ok with 001, but regarding 0002 and 0003 patches, I'm not sure if it's really worth replacing \"location\" with \"lsn\" there. BTW, the similar idea was proposed at [1] before, but seems \"location\" was left as it was.\n> \n> [1]\n> https://postgr.es/m/20487.1494514594@sss.pgh.pa.us\n\nThis discussion strikes me as sufficient reason to make the change, with\nthe prior comment not really having all that much weight. 
When we're\nactually pretty consistent with one term, having random places where we\nare inconsistent leads people to be unsure about which way to go and\nthen we end up having to have this discussion. Would be great to avoid\nhaving to have it again in the future.\n\nThanks,\n\nStephen", "msg_date": "Tue, 1 Feb 2022 16:05:38 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "On Tue, Feb 1, 2022 at 9:39 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> I found that CreateRestartPoint() already reported the redo lsn as follows after emitting the restartpoint log message. To avoid duplicated logging of the same information, we should update this code?\n>\n> ereport((log_checkpoints ? LOG : DEBUG2),\n> (errmsg(\"recovery restart point at %X/%X\",\n> LSN_FORMAT_ARGS(lastCheckPoint.redo)),\n> xtime ? errdetail(\"Last completed transaction was at log time %s.\",\n> timestamptz_to_str(xtime)) : 0));\n>\n> This code reports lastCheckPoint.redo as redo lsn. OTOH, with the patch, LogCheckpointEnd() reports ControlFile->checkPointCopy.redo. They may be different, for example, when the current DB state is not DB_IN_ARCHIVE_RECOVERY. In this case, which lsn should we report as redo lsn?\n\nDo we ever reach CreateRestartPoint when ControlFile->stat !=\nDB_IN_ARCHIVE_RECOVERY? 
Assert(ControlFile->state ==\nDB_IN_ARCHIVE_RECOVERY); in CreateRestartPoint doesn't fail any\nregression tests.\n\nHere's what can happen:\nlastCheckPoint.redo is 100 and ControlFile->checkPointCopy.redo is\n105, so, \"skipping restartpoint, already performed at %X/%X\"\nLogCheckpointEnd isn't reached\nlastCheckPoint.redo is 105 and ControlFile->checkPointCopy.redo is 100\nand ControlFile->state == DB_IN_ARCHIVE_RECOVERY, then the control\nfile gets updated and LogCheckpointEnd prints the right redo lsn.\nlastCheckPoint.redo is 105 and ControlFile->checkPointCopy.redo is 100\nand ControlFile->state != DB_IN_ARCHIVE_RECOVERY, then the control file\ndoesn't get updated and LogCheckpointEnd just prints the control redo\nlsn. Looks like this case is rare given Assert(ControlFile->state ==\nDB_IN_ARCHIVE_RECOVERY); doesn't fail any tests.\n\nI think we should just let LogCheckpointEnd print the redo lsn from\nthe control file. We can just remove the above errmsg(\"recovery\nrestart point at %X/%X\" message altogether or just print it only in\nthe rare scenario, something like below:\n\nif (ControlFile->state != DB_IN_ARCHIVE_RECOVERY)\n{\n ereport((log_checkpoints ? 
LOG : DEBUG2),\n (errmsg(\"performed recovery restart point at %X/%X while\nthe database state is %s\",\n LSN_FORMAT_ARGS(lastCheckPoint.redo),\ngetDBState(ControlFile->state))));\n}\n\nAnd the last commit/abort record's timestamp will always get logged\neven before we reach here in the main redo loop (errmsg(\"last\ncompleted transaction was at log time %s\").\n\nOr another way is to just pass the redo lsn to LogCheckpointEnd and\npass the lastCheckPoint.redo in if (ControlFile->state !=\nDB_IN_ARCHIVE_RECOVERY) cases or when control file isn't updated but\nrestart point happened.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.", "msg_date": "Wed, 2 Feb 2022 20:16:35 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "\n\nOn 2022/02/02 23:46, Bharath Rupireddy wrote:\n> On Tue, Feb 1, 2022 at 9:39 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> I found that CreateRestartPoint() already reported the redo lsn as follows after emitting the restartpoint log message. To avoid duplicated logging of the same information, we should update this code?\n>>\n>> ereport((log_checkpoints ? LOG : DEBUG2),\n>> (errmsg(\"recovery restart point at %X/%X\",\n>> LSN_FORMAT_ARGS(lastCheckPoint.redo)),\n>> xtime ? errdetail(\"Last completed transaction was at log time %s.\",\n>> timestamptz_to_str(xtime)) : 0));\n>>\n>> This code reports lastCheckPoint.redo as redo lsn. OTOH, with the patch, LogCheckpointEnd() reports ControlFile->checkPointCopy.redo. They may be different, for example, when the current DB state is not DB_IN_ARCHIVE_RECOVERY. In this case, which lsn should we report as redo lsn?\n> \n> Do we ever reach CreateRestartPoint when ControlFile->state !=\n> DB_IN_ARCHIVE_RECOVERY? 
Assert(ControlFile->state ==\n> DB_IN_ARCHIVE_RECOVERY); in CreateRestartPoint doesn't fail any\n> regression tests.\n\nISTM that CreateRestartPoint() can reach the condition ControlFile->state != DB_IN_ARCHIVE_RECOVERY. Please imagine the case where CreateRestartPoint() has already started and calls CheckPointGuts(). If the standby server is promoted while CreateRestartPoint() is flushing the data to disk at CheckPointGuts(), the state would be updated to DB_IN_PRODUCTION and CreateRestartPoint() can see the state != DB_IN_ARCHIVE_RECOVERY later.\n\nAs far as I read the code, this case seems to be able to make the server unrecoverable. If this case happens, since pg_control is not updated, pg_control still indicates the REDO LSN of last valid restartpoint. But CreateRestartPoint() seems to delete old WAL files based on its \"current\" REDO LSN not pg_control's REDO LSN. That is, WAL files required for the crash recovery starting from pg_control's REDO LSN would be removed.\n\nIf this understanding is right, to address this issue, probably we need to make CreateRestartPoint() do nothing (return immediately) when the state != DB_IN_ARCHIVE_RECOVERY?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 3 Feb 2022 13:59:03 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "At Thu, 3 Feb 2022 13:59:03 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2022/02/02 23:46, Bharath Rupireddy wrote:\n> > On Tue, Feb 1, 2022 at 9:39 PM Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote:\n> >> I found that CreateRestartPoint() already reported the redo lsn as\n> >> follows after emitting the restartpoint log message. 
To avoid\n> >> duplicated logging of the same information, we should update this\n> >> code?\n> >>\n> >> ereport((log_checkpoints ? LOG : DEBUG2),\n> >> (errmsg(\"recovery restart point at %X/%X\",\n> >> LSN_FORMAT_ARGS(lastCheckPoint.redo)),\n> >> xtime ? errdetail(\"Last completed transaction was\n> >> at log time %s.\",\n> >> timestamptz_to_str(xtime))\n> >> : 0));\n> >>\n> >> This code reports lastCheckPoint.redo as redo lsn. OTOH, with the\n> >> patch, LogCheckpointEnd() reports\n> >> ControlFile->checkPointCopy.redo. They may be different, for example,\n> >> when the current DB state is not DB_IN_ARCHIVE_RECOVERY. In this case,\n> >> which lsn should we report as redo lsn?\n> > Do we ever reach CreateRestartPoint when ControlFile->stat !=\n> > DB_IN_ARCHIVE_RECOVERY? Assert(ControlFile->state ==\n> > DB_IN_ARCHIVE_RECOVERY); in CreateRestartPoint doesn't fail any\n> > regression tests.\n> \n> ISTM that CreateRestartPoint() can reach the condition\n> ControlFile->state != DB_IN_ARCHIVE_RECOVERY. Please imagine the case\n> where CreateRestartPoint() has already started and calls\n> CheckPointGuts(). If the standby server is promoted while\n> CreateRestartPoint() is flushing the data to disk at CheckPointGuts(),\n> the state would be updated to DB_IN_PRODUCTION and\n> CreateRestartPoint() can see the state != DB_IN_ARCHIVE_RECOVERY\n> later.\n\nBy the way a comment there:\n> * this is a quick hack to make sure nothing really bad happens if somehow\n> * we get here after the end-of-recovery checkpoint.\n\nnow looks a bit wrong since now it's normal that a restartpoint ends\nafter promotion.\n\n> As far as I read the code, this case seems to be able to make the\n> server unrecoverable. If this case happens, since pg_control is not\n> updated, pg_control still indicates the REDO LSN of last valid\n> restartpoint. But CreateRestartPoint() seems to delete old WAL files\n> based on its \"current\" REDO LSN not pg_control's REDO LSN. 
That is,\n> WAL files required for the crash recovery starting from pg_control's\n> REDO LSN would be removed.\n\nSeems right. (I didn't confirm the behavior, though.)\n\n> If this understanding is right, to address this issue, probably we\n> need to make CreateRestartPoint() do nothing (return immediately) when\n> the state != DB_IN_ARCHIVE_RECOVERY?\n\nOne way to take. In that case should we log something like\n\"Restartpoint canceled\" or something? \n\nBy the way, restart point should start only while recovering, and at\nthe time of the start both checkpoint.redo and checkpoint LSN are\nalready past. We shouldn't update minRecoveryPoint after promotion,\nbut is there any reason for not updating the checkPoint and\ncheckPointCopy? If we update them after promotion, the\nwhich-LSN-to-show problem would be gone.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 03 Feb 2022 15:50:53 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "\n\nOn 2022/02/03 15:50, Kyotaro Horiguchi wrote:\n> One way to take. In that case should we log something like\n> \"Restartpoint canceled\" or something?\n\n+1\n\n\n> By the way, restart point should start only while recovering, and at\n> the time of the start both checkpoint.redo and checkpoint LSN are\n> already past. We shouldn't update minRecoveryPoint after promotion,\n> but is there any reason for not updating the checkPoint and\n> checkPointCopy? If we update them after promotion, the\n> which-LSN-to-show problem would be gone.\n\nI tried to find the reason by reading the past discussion, but have not found that yet.\n\nIf we update checkpoint and REDO LSN at pg_control in that case, we also need to update min recovery point at pg_control? 
Otherwise the min recovery point at pg_control still indicates the old LSN that the previous restart point set.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Fri, 4 Feb 2022 10:59:04 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "At Fri, 4 Feb 2022 10:59:04 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2022/02/03 15:50, Kyotaro Horiguchi wrote:\n> > One way to take. In that case should we log something like\n> > \"Restartpoint canceled\" or something?\n> \n> +1\n> \n> \n> > By the way, restart point should start only while recovering, and at\n> > the time of the start both checkpoint.redo and checkpoint LSN are\n> > already past. We shouldn't update minRecoveryPoint after promotion,\n> > but is there any reason for not updating the checkPoint and\n> > checkPointCopy? If we update them after promotion, the\n> > which-LSN-to-show problem would be gone.\n> \n> I tried to find the reason by reading the past discussion, but have\n> not found that yet.\n> \n> If we update checkpoint and REDO LSN at pg_control in that case, we\n> also need to update min recovery point at pg_control? 
Otherwise the\n> min recovery point at pg_control still indicates the old LSN that\n> the previous restart point set.\n\nI had an assumption that the reason I think it shouldn't update\nminRecoveryPoint is that it has been or is going to be reset to\ninvalid LSN by promotion and the checkpoint should refrain from\ntouching it.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 07 Feb 2022 10:16:34 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "At Mon, 07 Feb 2022 10:16:34 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Fri, 4 Feb 2022 10:59:04 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> > On 2022/02/03 15:50, Kyotaro Horiguchi wrote:\n> > > By the way, restart point should start only while recovering, and at\n> > > the time of the start both checkpoint.redo and checkpoint LSN are\n> > > already past. We shouldn't update minRecoveryPoint after promotion,\n> > > but is there any reason for not updating the checkPoint and\n> > > checkPointCopy? If we update them after promotion, the\n> > > which-LSN-to-show problem would be gone.\n> > \n> > I tried to find the reason by reading the past discussion, but have\n> > not found that yet.\n> > \n> > If we update checkpoint and REDO LSN at pg_control in that case, we\n> > also need to update min recovery point at pg_control? Otherwise the\n> > min recovery point at pg_control still indicates the old LSN that\n> > the previous restart point set.\n> \n> I had an assumption that the reason I think it shouldn't update\n> minRecoveryPoint is that it has been or is going to be reset to\n> invalid LSN by promotion and the checkpoint should refrain from\n> touching it.\n\nHmm.. It doesn't seem to be the case. 
If a server crashes just after\npromotion and before requesting post-promotion checkpoint,\nminRecoveryPoint stays at a valid LSN.\n\n(Promoted at 0/7000028)\nDatabase cluster state: in production\nLatest checkpoint location: 0/6000060\nLatest checkpoint's REDO location: 0/6000028\nLatest checkpoint's REDO WAL file: 000000010000000000000006\nMinimum recovery ending location: 0/7000090\nMin recovery ending loc's timeline: 2\n\nminRecoveryPoint/TLI are ignored in any case where a server in\nin-production state is started. In other words, the values are\nuseless. There's no clear or written reason for discarding the last\nongoing restartpoint after promotion.\n\nBefore fast-promotion was introduced, we shouldn't get there after\nend-of-recovery checkpoint (but somehow reached sometimes?) but it is\nquite normal nowadays. Or to the contrary, we're expecting it to\nhappen and it is regarded as a normal checkpoint. So what we should do\nthere nowadays is as follows.\n\n- If any later checkpoint/restartpoint has been established, just skip\n  remaining task then return false. (!chkpt_was_latest)\n  (I'm not sure this can happen, though.)\n\n- we update control file only when archive recovery is still ongoing.\n\n- Otherwise reset minRecoveryPoint then continue.\n\nDo you have any thoughts or opinions?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex 958220c495..ab8a4d9a1b 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -9658,6 +9658,7 @@ CreateRestartPoint(int flags)\n \tXLogRecPtr\tendptr;\n \tXLogSegNo\t_logSegNo;\n \tTimestampTz xtime;\n+\tbool\t\tchkpt_was_latest = false;\n \n \t/* Get a local copy of the last safe checkpoint record. 
*/\n \tSpinLockAcquire(&XLogCtl->info_lck);\n@@ -9752,44 +9753,69 @@ CreateRestartPoint(int flags)\n \tPriorRedoPtr = ControlFile->checkPointCopy.redo;\n \n \t/*\n-\t * Update pg_control, using current time. Check that it still shows\n-\t * DB_IN_ARCHIVE_RECOVERY state and an older checkpoint, else do nothing;\n-\t * this is a quick hack to make sure nothing really bad happens if somehow\n-\t * we get here after the end-of-recovery checkpoint.\n+\t * Update pg_control, using current time if no later checkpoints have been\n+\t * performed.\n \t */\n \tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n-\tif (ControlFile->state == DB_IN_ARCHIVE_RECOVERY &&\n-\t\tControlFile->checkPointCopy.redo < lastCheckPoint.redo)\n+\tif (ControlFile->checkPointCopy.redo < lastCheckPoint.redo)\n \t{\n+\t\tchkpt_was_latest = true;\n \t\tControlFile->checkPoint = lastCheckPointRecPtr;\n \t\tControlFile->checkPointCopy = lastCheckPoint;\n \n \t\t/*\n-\t\t * Ensure minRecoveryPoint is past the checkpoint record. Normally,\n-\t\t * this will have happened already while writing out dirty buffers,\n-\t\t * but not necessarily - e.g. because no buffers were dirtied. We do\n-\t\t * this because a non-exclusive base backup uses minRecoveryPoint to\n-\t\t * determine which WAL files must be included in the backup, and the\n-\t\t * file (or files) containing the checkpoint record must be included,\n-\t\t * at a minimum. Note that for an ordinary restart of recovery there's\n-\t\t * no value in having the minimum recovery point any earlier than this\n+\t\t * Ensure minRecoveryPoint is past the checkpoint record while archive\n+\t\t * recovery is still ongoing. Normally, this will have happened\n+\t\t * already while writing out dirty buffers, but not necessarily -\n+\t\t * e.g. because no buffers were dirtied. 
We do this because a\n+\t\t * non-exclusive base backup uses minRecoveryPoint to determine which\n+\t\t * WAL files must be included in the backup, and the file (or files)\n+\t\t * containing the checkpoint record must be included, at a\n+\t\t * minimum. Note that for an ordinary restart of recovery there's no\n+\t\t * value in having the minimum recovery point any earlier than this\n \t\t * anyway, because redo will begin just after the checkpoint record.\n \t\t */\n-\t\tif (ControlFile->minRecoveryPoint < lastCheckPointEndPtr)\n+\t\tif (ControlFile->state == DB_IN_ARCHIVE_RECOVERY)\n \t\t{\n-\t\t\tControlFile->minRecoveryPoint = lastCheckPointEndPtr;\n-\t\t\tControlFile->minRecoveryPointTLI = lastCheckPoint.ThisTimeLineID;\n+\t\t\tif (ControlFile->minRecoveryPoint < lastCheckPointEndPtr)\n+\t\t\t{\n+\t\t\t\tControlFile->minRecoveryPoint = lastCheckPointEndPtr;\n+\t\t\t\tControlFile->minRecoveryPointTLI = lastCheckPoint.ThisTimeLineID;\n \n-\t\t\t/* update local copy */\n-\t\t\tminRecoveryPoint = ControlFile->minRecoveryPoint;\n-\t\t\tminRecoveryPointTLI = ControlFile->minRecoveryPointTLI;\n+\t\t\t\t/* update local copy */\n+\t\t\t\tminRecoveryPoint = ControlFile->minRecoveryPoint;\n+\t\t\t\tminRecoveryPointTLI = ControlFile->minRecoveryPointTLI;\n+\t\t\t}\n+\t\t\tif (flags & CHECKPOINT_IS_SHUTDOWN)\n+\t\t\t\tControlFile->state = DB_SHUTDOWNED_IN_RECOVERY;\n+\t\t}\n+\t\telse\n+\t\t{\n+\t\t\t/*\n+\t\t\t * We have exited from archive-recovery mode after starting. 
Crash\n+\t\t\t * recovery ever after should always recover to the end\n+\t\t\t * of WAL\n+\t\t\t */\n+\t\t\tControlFile->minRecoveryPoint = InvalidXLogRecPtr;\n+\t\t\tControlFile->minRecoveryPointTLI = 0;\n \t\t}\n-\t\tif (flags & CHECKPOINT_IS_SHUTDOWN)\n-\t\t\tControlFile->state = DB_SHUTDOWNED_IN_RECOVERY;\n \t\tUpdateControlFile();\n \t}\n \tLWLockRelease(ControlFileLock);\n \n+\t/*\n+\t * Skip all post-checkpoint work if others have done that with later\n+\t * checkpoints.\n+\t */\n+\tif (!chkpt_was_latest)\n+\t{\n+\t\tereport((log_checkpoints ? LOG : DEBUG2),\n+\t\t\t\t(errmsg(\"post-restartpoint cleanup is skipped at %X/%X, because later restartpoints have been already performed\",\n+\t\t\t\t\t\tLSN_FORMAT_ARGS(lastCheckPoint.redo))));\n+\n+\t\t/* this checkpoint has not been established */\n+\t\treturn false;\n+\t}\n+\n \t/*\n \t * Update the average distance between checkpoints/restartpoints if the\n \t * prior checkpoint exists.", "msg_date": "Mon, 07 Feb 2022 12:02:58 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "\n\nOn 2022/02/07 12:02, Kyotaro Horiguchi wrote:\n> - If any later checkpoint/restartpoint has been established, just skip\n> remaining task then return false. (!chkpt_was_latest)\n> (I'm not sure this can happen, though.)\n> \n> - we update control file only when archive recovery is still ongoing.\n\nThis comment seems to conflict with what your PoC patch does. Because with the patch, ControlFile->checkPoint and ControlFile->checkPointCopy seem to be updated even when ControlFile->state != DB_IN_ARCHIVE_RECOVERY.\n\nI agree with what your PoC patch does for now. 
When we're not in archive recovery state, checkpoint and REDO locations in pg_control should be updated but min recovery point should be reset to invalid one (which instruments that subsequent crash recovery should replay all available WAL files).\n\n> - Otherwise reset minRecoveryPoint then continue.\n> \n> Do you have any thoughts or opinions?\n\nRegarding chkpt_was_latest, whether the state is DB_IN_ARCHIVE_RECOVERY or not, if checkpoint and redo locations in pg_control are updated, IMO we don't need to skip the \"remaining tasks\". Since those locations are updated and subsequent crash recovery will start from that redo location, for example, ISTM that we can safely delete old WAL files prior to the redo location as the \"remaining tasks\". Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 7 Feb 2022 13:51:31 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "At Mon, 7 Feb 2022 13:51:31 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2022/02/07 12:02, Kyotaro Horiguchi wrote:\n> > - If any later checkpoint/restartpoint has been established, just skip\n> > remaining task then return false. (!chkpt_was_latest)\n> > (I'm not sure this can happen, though.)\n> > - we update control file only when archive recovery is still ongoing.\n> \n> This comment seems to conflict with what your PoC patch does. Because\n> with the patch, ControlFile->checkPoint and\n> ControlFile->checkPointCopy seem to be updated even when\n> ControlFile->state != DB_IN_ARCHIVE_RECOVERY.\n\nAh, yeah, by \"update\" I meant that \"move forward\". Sorry for confusing\nwording.\n\n> I agree with what your PoC patch does for now. 
When we're not in\n> archive recovery state, checkpoint and REDO locations in pg_control\n> should be updated but min recovery point should be reset to invalid\n> one (which instruments that subsequent crash recovery should replay\n> all available WAL files).\n\nYes. All buffers before the last recovery point's end have been\nflushed out so the recovery point is valid as a checkpoint. On the\nother hand minRecoveryPoint is no longer needed and actually is\nignored at the next crash recovery. We can leave it alone but it is\nconsistent that it is cleared.\n\n> > - Otherwise reset minRecoveryPoint then continue.\n> > Do you have any thoughts or opinions?\n> \n> Regarding chkpt_was_latest, whether the state is\n> DB_IN_ARCHIVE_RECOVERY or not, if checkpoint and redo locations in\n> pg_control are updated, IMO we don't need to skip the \"remaining\n> tasks\". Since those locations are updated and subsequent crash\n> recovery will start from that redo location, for example, ISTM that we\n> can safely delete old WAL files prior to the redo location as the\n> \"remaining tasks\". Thought?\n\nIf I read you correctly, the PoC works that way. It updates pg_control\nif the restart point is latest then performs the remaining cleanup\ntasks in that case. Recovery state doesn't affect this process.\n\nI reexamined about the possibility of concurrent checkpoints.\n\nBoth CreateCheckPoint and CreateRestartPoint are called from\ncheckpointer loop, shutdown handler of checkpointer and standalone\nprocess. So I can't see a possibility of concurrent checkpoints.\n\nIn the past we had a time when startup process called CreateCheckPoint\ndirectly in the crash recovery case where checkpoint is not running\nbut since 7ff23c6d27 checkpoint is started before startup process\nstarts. So I conclude that that cannot happen.\n\nSo the attached takes away the path for the case where the restart\npoint is overtaken by a concurrent checkpoint.\n\nThus.. 
the attached removes the ambiguity of of the proposed patch\nabout the LSNs in the restartpoint-ending log message.\n\nThoughts?\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n From 43888869846b6de00fbddb215300a8ff774bbc04 Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Tue, 8 Feb 2022 16:42:53 +0900\nSubject: [PATCH v1] Get rid of unused path to handle concurrent checkpoints\n\nCreateRestartPoint considered the case a concurrent checkpoint is\nrunning. But after 7ff23c6d27 we no longer launch multiple checkpoints\nsimultaneously. That code path, if it is passed, may leave\nunrecoverable database by removing WAL segments that are required by\nthe last established restartpoint.\n---\n src/backend/access/transam/xlog.c | 53 +++++++++++++++++++------------\n 1 file changed, 32 insertions(+), 21 deletions(-)\n\ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex 958220c495..01a345e8bd 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -9752,29 +9752,30 @@ CreateRestartPoint(int flags)\n \tPriorRedoPtr = ControlFile->checkPointCopy.redo;\n \n \t/*\n-\t * Update pg_control, using current time. 
Check that it still shows\n-\t * DB_IN_ARCHIVE_RECOVERY state and an older checkpoint, else do nothing;\n-\t * this is a quick hack to make sure nothing really bad happens if somehow\n-\t * we get here after the end-of-recovery checkpoint.\n+\t * Update pg_control, using current time.\n \t */\n \tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n-\tif (ControlFile->state == DB_IN_ARCHIVE_RECOVERY &&\n-\t\tControlFile->checkPointCopy.redo < lastCheckPoint.redo)\n+\n+\t/* We mustn't have a concurrent checkpoint that advances checkpoint LSN */\n+\tAssert(lastCheckPoint.redo > ControlFile->checkPointCopy.redo);\n+\n+\tControlFile->checkPoint = lastCheckPointRecPtr;\n+\tControlFile->checkPointCopy = lastCheckPoint;\n+\n+\t/*\n+\t * Ensure minRecoveryPoint is past the checkpoint record while archive\n+\t * recovery is still ongoing. Normally, this will have happened already\n+\t * while writing out dirty buffers, but not necessarily - e.g. because no\n+\t * buffers were dirtied. We do this because a non-exclusive base backup\n+\t * uses minRecoveryPoint to determine which WAL files must be included in\n+\t * the backup, and the file (or files) containing the checkpoint record\n+\t * must be included, at a minimum. Note that for an ordinary restart of\n+\t * recovery there's no value in having the minimum recovery point any\n+\t * earlier than this anyway, because redo will begin just after the\n+\t * checkpoint record.\n+\t */\n+\tif (ControlFile->state == DB_IN_ARCHIVE_RECOVERY)\n \t{\n-\t\tControlFile->checkPoint = lastCheckPointRecPtr;\n-\t\tControlFile->checkPointCopy = lastCheckPoint;\n-\n-\t\t/*\n-\t\t * Ensure minRecoveryPoint is past the checkpoint record. Normally,\n-\t\t * this will have happened already while writing out dirty buffers,\n-\t\t * but not necessarily - e.g. because no buffers were dirtied. 
We do\n-\t\t * this because a non-exclusive base backup uses minRecoveryPoint to\n-\t\t * determine which WAL files must be included in the backup, and the\n-\t\t * file (or files) containing the checkpoint record must be included,\n-\t\t * at a minimum. Note that for an ordinary restart of recovery there's\n-\t\t * no value in having the minimum recovery point any earlier than this\n-\t\t * anyway, because redo will begin just after the checkpoint record.\n-\t\t */\n \t\tif (ControlFile->minRecoveryPoint < lastCheckPointEndPtr)\n \t\t{\n \t\t\tControlFile->minRecoveryPoint = lastCheckPointEndPtr;\n@@ -9786,8 +9787,18 @@ CreateRestartPoint(int flags)\n \t\t}\n \t\tif (flags & CHECKPOINT_IS_SHUTDOWN)\n \t\t\tControlFile->state = DB_SHUTDOWNED_IN_RECOVERY;\n-\t\tUpdateControlFile();\n \t}\n+\telse\n+\t{\n+\t\t/*\n+\t\t * We have exited from archive-recovery mode after this restartpoint\n+\t\t * started. Crash recovery ever after should always recover to the end\n+\t\t * of WAL.\n+\t\t */\n+\t\tControlFile->minRecoveryPoint = InvalidXLogRecPtr;\n+\t\tControlFile->minRecoveryPointTLI = 0;\n+\t}\n+\tUpdateControlFile();\n \tLWLockRelease(ControlFileLock);\n \n \t/*\n-- \n2.27.0", "msg_date": "Tue, 08 Feb 2022 16:58:22 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "Mmm.. checkpoint and checkpointer are quite confusing in this context..\n\nAt Tue, 08 Feb 2022 16:58:22 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Mon, 7 Feb 2022 13:51:31 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> > \n> > \n> > On 2022/02/07 12:02, Kyotaro Horiguchi wrote:\n> > > - If any later checkpoint/restartpoint has been established, just skip\n> > > remaining task then return false. 
(!chkpt_was_latest)\n> > > (I'm not sure this can happen, though.)\n> > > - we update control file only when archive recovery is still ongoing.\n> > \n> > This comment seems to conflict with what your PoC patch does. Because\n> > with the patch, ControlFile->checkPoint and\n> > ControlFile->checkPointCopy seem to be updated even when\n> > ControlFile->state != DB_IN_ARCHIVE_RECOVERY.\n> \n> Ah, yeah, by \"update\" I meant that \"move forward\". Sorry for confusing\n> wording.\n\nPlease ignore the \"that\".\n\n> > I agree with what your PoC patch does for now. When we're not in\n> > archive recovery state, checkpoint and REDO locations in pg_control\n> > should be updated but min recovery point should be reset to invalid\n> > one (which instruments that subsequent crash recovery should replay\n> > all available WAL files).\n> \n> Yes. All buffers before the last recovery point's end have been\n> flushed out so the recovery point is valid as a checkpoint. On the\n> other hand minRecoveryPoint is no longer needed and actually is\n> ignored at the next crash recovery. We can leave it alone but it is\n> consistent that it is cleared.\n> \n> > > - Otherwise reset minRecoveryPoint then continue.\n> > > Do you have any thoughts or opinions?\n> > \n> > Regarding chkpt_was_latest, whether the state is\n> > DB_IN_ARCHIVE_RECOVERY or not, if checkpoint and redo locations in\n> > pg_control are updated, IMO we don't need to skip the \"remaining\n> > tasks\". Since those locations are updated and subsequent crash\n> > recovery will start from that redo location, for example, ISTM that we\n> > can safely delete old WAL files prior to the redo location as the\n> > \"remaining tasks\". Thought?\n> \n> If I read you correctly, the PoC works that way. It updates pg_control\n> if the restart point is latest then performs the remaining cleanup\n> tasks in that case. 
Recovery state doesn't affect this process.\n> \n> I reexamined about the possibility of concurrent checkpoints.\n> \n> Both CreateCheckPoint and CreateRestartPoint are called from\n> checkpointer loop, shutdown handler of checkpointer and standalone\n> process. So I can't see a possibility of concurrent checkpoints.\n> \n> In the past we had a time when startup process called CreateCheckPoint\n\n- directly in the crash recovery case where checkpoint is not running\n- but since 7ff23c6d27 checkpoint is started before startup process\n+ directly in the crash recovery case where checkpointer is not running\n+ but since 7ff23c6d27 checkpointer is launched before startup process\n\n> starts. So I conclude that that cannot happen.\n> \n> So the attached takes away the path for the case where the restart\n> point is overtaken by a concurrent checkpoint.\n> \n> Thus.. the attached removes the ambiguity of of the proposed patch\n> about the LSNs in the restartpoint-ending log message.\n\nThoughts?\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 08 Feb 2022 17:40:29 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "On Tue, Feb 8, 2022 at 2:10 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> > Thus.. the attached removes the ambiguity of of the proposed patch\n> > about the LSNs in the restartpoint-ending log message.\n>\n> Thoughts?\n\nThanks for the patch. I have few comments on the\nv1-0001-Get-rid-of-unused-path-to-handle-concurrent-check.patch\n\n1) Can we have this Assert right after \"skipping restartpoint, already\nperformed at %X/%X\" error message block? 
Does it make any difference?\nMy point is that if at all, we were to assert this, why can't we do it\nbefore CheckPointGuts?\n+ /* We mustn't have a concurrent checkpoint that advances checkpoint LSN */\n+ Assert(lastCheckPoint.redo > ControlFile->checkPointCopy.redo);\n+\n2) Related to the above Assert, do we really need an assertion or a FATAL error?\n3) Let's be consistent with \"crash recovery\" - replace\n\"archive-recovery\" with \"archive recovery\"?\n+ * We have exited from archive-recovery mode after this restartpoint\n+ * started. Crash recovery ever after should always recover to the end\n4) Isn't it enough to say \"Crash recovery should always recover to the\nend of WAL.\"?\n+ * started. Crash recovery ever after should always recover to the end\n5) Is there a reliable test case covering this code? Please point me\nif the test case is shared upthread somewhere.\n6) So, with this patch, the v8 patch-set posted at [1] doesn't need\nany changes IIUC. If that's the case, please feel free to post all the\npatches together such that they get tested in cfbot.\n\n[1] - https://www.postgresql.org/message-id/CALj2ACUtZhTb%3D2ENkF3BQ3wi137uaGi__qzvXC-qFYC0XwjALw%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 8 Feb 2022 14:33:01 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "Hi, Bharath.\n\nAt Tue, 8 Feb 2022 14:33:01 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Tue, Feb 8, 2022 at 2:10 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > > Thus.. the attached removes the ambiguity of of the proposed patch\n> > > about the LSNs in the restartpoint-ending log message.\n> >\n> > Thoughts?\n> \n> Thanks for the patch. 
I have a few comments on the\n> v1-0001-Get-rid-of-unused-path-to-handle-concurrent-check.patch\n\nThanks for checking this!\n\n> 1) Can we have this Assert right after \"skipping restartpoint, already\n> performed at %X/%X\" error message block? Does it make any difference?\n> My point is that if at all, we were to assert this, why can't we do it\n> before CheckPointGuts?\n> + /* We mustn't have a concurrent checkpoint that advances checkpoint LSN */\n> + Assert(lastCheckPoint.redo > ControlFile->checkPointCopy.redo);\n\nNo. The assertion checks if something wrong has happened while\nCheckPointGuts - which may take a long time - is running. If we need\nto do that, it should be after CheckPointGuts. (However, I removed it\nfinally. See below.)\n\n> 2) Related to the above Assert, do we really need an assertion or a FATAL error?\n\nIt's just to make sure in case that happens by any chance in the\nfuture. But on second thought, as I mentioned, CreateRestartPoint is\ncalled from a standalone process or from the checkpointer. I added that\nassertion at the beginning of CreateRestartPoint. I think that\nassertion is logically equal to the old assertion.\n\nI remember that I felt uncomfortable with the lock-less behavior on\nControlFile, which makes the code a bit complex to read. So I moved\nthe \"PriorRedoPtr = \" line to within the lock section just below.\n\nIn addition to that, I was confused by the parallel use of\nlastCheckPoint.redo and RedoRecPtr. So I replaced lastCheckPoint.redo\nwith RedoRecPtr after assigning the former to the latter.\n\n> 3) Let's be consistent with \"crash recovery\" - replace\n> \"archive-recovery\" with \"archive recovery\"?\n> + * We have exited from archive-recovery mode after this restartpoint\n> + * started. Crash recovery ever after should always recover to the end\n\nThat's sensible. 
I found several existing uses of archive-recovery in\nxlog.c and a few other files but the fix for them is separated as\nanother patch (0005).\n\n> 4) Isn't it enough to say \"Crash recovery should always recover to the\n> end of WAL.\"?\n> + * started. Crash recovery ever after should always recover to the end\n\nI think we need to explicitly say something like \"we have exited\narchive recovery while *this* restartpoint is running\". I simplified\nthe sentence as follows.\n\n+ * Archive recovery have ended. Crash recovery ever after should\n+ * always recover to the end of WAL.\n\n\n> 5) Is there a reliable test case covering this code? Please point me\n> if the test case is shared upthread somewhere.\n\nNothing. Looking from the opposite side, the existing code for the\ncompeting restartpoint/checkpoint case should not have been exercised\nat least for these several major versions. Instead, I added an\nassertion at the beginning of CreateRestartPoint that asserts that\n\"this should be called only under standalone process or from\ncheckpointer.\". If that assert doesn't fire during the whole test run, it\nshould be proof that the premise of this patch is correct.\n\n> 6) So, with this patch, the v8 patch-set posted at [1] doesn't need\n> any changes IIUC. 
If that's the case, please feel free to post all the\n> patches together such that they get tested in cfbot.\n\nThe two are different fixes so I don't think they ought to be\nmerged together.\n\n> [1] - https://www.postgresql.org/message-id/CALj2ACUtZhTb%3D2ENkF3BQ3wi137uaGi__qzvXC-qFYC0XwjALw%40mail.gmail.com\n\nAs for the old 0001 (though I said it's fine :p), I added a comment that\nreading ControlFile->checkPoint* is safe here.\n\nThe old 0002 (attached 0003) looks fine to me.\n\nThe old 0003 (attached 0004):\n\n+++ b/src/backend/access/rmgrdesc/xlogdesc.c\n-\t\tappendStringInfo(buf, \"redo %X/%X; \"\n+\t\tappendStringInfo(buf, \"redo lsn %X/%X; \"\n\nIt is shown in the context of a checkpoint record, so I think it is\nnot needed and rather lengthens the dump line uselessly. \n\n+++ b/src/backend/access/transam/xlog.c\n-\t\t\t\t(errmsg(\"request to flush past end of generated WAL; request %X/%X, current position %X/%X\",\n+\t\t\t\t(errmsg(\"request to flush past end of generated WAL; request lsn %X/%X, current lsn %X/%X\",\n\n+++ b/src/backend/replication/walsender.c\n-\t\t\t\t\t(errmsg(\"requested starting point %X/%X is ahead of the WAL flush position of this server %X/%X\",\n+\t\t\t\t\t(errmsg(\"requested starting point %X/%X is ahead of the WAL flush LSN of this server %X/%X\",\n\n\"WAL\" is upper-cased. So it seems rather strange that the \"lsn\" is\nlower-cased. In the first place the message doesn't look like a\nuser-facing error message and I feel we don't need position or lsn\nthere..\n\n+++ b/src/bin/pg_rewind/pg_rewind.c\n-\t\tpg_log_info(\"servers diverged at WAL location %X/%X on timeline %u\",\n+\t\tpg_log_info(\"servers diverged at WAL LSN %X/%X on timeline %u\",\n\nI feel that we don't need \"WAL\" there.\n\n+++ b/src/bin/pg_waldump/pg_waldump.c\n-\tprintf(_(\" -e, --end=RECPTR stop reading at WAL location RECPTR\\n\"));\n+\tprintf(_(\" -e, --end=RECPTR stop reading at WAL LSN RECPTR\\n\"));\n\nMmm.. \"WAL LSN RECPTR\" looks strange to me. 
In the first place I\ndon't think \"RECPTR\" is a user-facing term. Doesn't something like the\nfollowing work?\n\n+\tprintf(_(\" -e, --end=WAL-LSN stop reading at WAL-LSN\\n\"));\n\nSome changes in this patch shorten the main message text of\nfprintf-ish functions. That lets the succeeding parameters be\ninlined.\n\n\n0001: The fix of CreateRestartPoint\n0002: Add LSNs to checkpoint logs (the main patch)\n0003: Change \"location\" to \"LSN\" of pg_controldata\n0004: The same as 0003 for other uses of \"location\".\n0005: Unhyphenate \"-recovery\" terms.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 09 Feb 2022 11:52:04 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "\n\nOn 2022/02/09 11:52, Kyotaro Horiguchi wrote:\n> 0001: The fix of CreateRestartPoint\n\nThis patch might be ok for the master branch. But since concurrent checkpoint and restartpoint can happen in v14 or before, we would need another patch based on that assumption, for the backport. How about fixing the bug in all the branches first, and then applying this patch in the master to improve the code?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 11 Feb 2022 01:00:03 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "At Fri, 11 Feb 2022 01:00:03 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2022/02/09 11:52, Kyotaro Horiguchi wrote:\n> > 0001: The fix of CreateRestartPoint\n> \n> This patch might be ok for the master branch. 
But since concurrent\n> checkpoint and restartpoint can happen in v14 or before, we would need\n> another patch based on that assumption, for the backport. How about\n> fixing the bug in all the branches first, and then applying this patch in\n> the master to improve the code?\n\nFor backbranches, the attached for pg14 does part of the full patch.\nOf the following, I think we should do (a) and (b) to make future\nbackpatchings easier.\n\na) Use RedoRecPtr and PriorRedoPtr after they are assigned.\n\nb) Move assignment to PriorRedoPtr into the ControlFileLock section.\n\nc) Skip update of minRecoveryPoint only when the checkpoint gets old.\n\nd) Skip call to UpdateCheckPointDistanceEstimate() when RedoRecPtr <=\n   PriorRedoPtr.\n\n# Mmm. The v9-0001 contains a silly mistake here..\n\nStill I'm not sure whether that case really happens and how checkpoint\nbehaves *after* that happens, but at least it protects the database from\nthe possible unrecoverable state due to the known issue here..\n\nIt doesn't apply even on pg13 (due to LSN_FORMAT_ARGS). I will make\nthe per-version patches if you are fine with this.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 14 Feb 2022 14:40:22 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "At Mon, 14 Feb 2022 14:40:22 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> It doesn't apply even on pg13 (due to LSN_FORMAT_ARGS). I will make\n> the per-version patches if you are fine with this.\n\nOops! 
I forgot to rename the patch to avoid confusion on CF-bots.\nI'll resend a new version soon to avoid the confusion..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 14 Feb 2022 14:42:18 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "At Mon, 14 Feb 2022 14:42:18 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> I'll resend a new version soon to avoid the confusion..\n\nIn this version, 0001 gets one fix and two comment updates.\n\n-\t\t * Archive recovery have ended. Crash recovery ever after should always\n+\t\t * Archive recovery has ended. Crash recovery ever after should always\n\n-\t/* the second term is just in case */\n-\tif (PriorRedoPtr != InvalidXLogRecPtr || RedoRecPtr > PriorRedoPtr)\n+ \t/*\n+ \t * Update the average distance between checkpoints/restartpoints if the\n+\t * prior checkpoint exists. 
The second term is just in case.\n+ \t */\n+\tif (PriorRedoPtr != InvalidXLogRecPtr && RedoRecPtr > PriorRedoPtr)\n \t\tUpdateCheckPointDistanceEstimate(RedoRecPtr - PriorRedoPtr);\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 14 Feb 2022 14:52:15 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "\n\nOn 2022/02/14 14:40, Kyotaro Horiguchi wrote:\n> For backbranches, the attached for pg14 does part of the full patch.\n\nThanks for updating the patch!\n\n\n> Of the following, I think we should do (a) and (b) to make future\n> backpatchings easier.\n> \n> a) Use RedoRecPtr and PriorRedoPtr after they are assigned.\n> \n> b) Move assignment to PriorRedoPtr into the ControlFileLock section.\n\nI failed to understand how (a) and (b) can make the backpatching easier. How easy to backpatch seems the same whether we apply (a) and (b) or not...\n\n\n> c) Skip udpate of minRecoveryPoint only when the checkpoint gets old.\n\nYes.\n\n\n> d) Skip call to UpdateCheckPointDistanceEstimate() when RedoRecPtr <=\n> PriorRedoPtr.\n\nBut \"RedoRecPtr <= PriorRedoPtr\" will never happen, will it? Because a restartpoint is skipped at the beginning of CreateRestartPoint() in that case. If this understanding is right, the check of \"RedoRecPtr <= PriorRedoPtr\" is not necessary before calling UpdateCheckPointDistanceEstimate().\n\n\n+\t\tControlFile->minRecoveryPoint = InvalidXLogRecPtr;\n+\t\tControlFile->minRecoveryPointTLI = 0;\n\nDon't we need to update LocalMinRecoveryPoint and LocalMinRecoveryPointTLI after this? Maybe it's not necessary, but ISTM that it's safer and better to always update them whether the state is DB_IN_ARCHIVE_RECOVERY or not.\n\n\n \t\tif (flags & CHECKPOINT_IS_SHUTDOWN)\n \t\t\tControlFile->state = DB_SHUTDOWNED_IN_RECOVERY;\n\nSame as above. 
IMO it's safer and better to always update the state (whether the state is DB_IN_ARCHIVE_RECOVERY or not) if CHECKPOINT_IS_SHUTDOWN flag is passed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 22 Feb 2022 01:59:45 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "At Tue, 22 Feb 2022 01:59:45 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> > Of the following, I think we should do (a) and (b) to make future\n> > backpatchings easier.\n> > a) Use RedoRecPtr and PriorRedoPtr after they are assigned.\n> > b) Move assignment to PriorRedoPtr into the ControlFileLock section.\n> \n> I failed to understand how (a) and (b) can make the backpatching\n> easier. How easy to backpatch seems the same whether we apply (a) and\n> (b) or not...\n\nThat presumes that the patch applied to master contains (a) and (b).\nSo if it doesn't, those are not needed by older branches.\n\n\n> > c) Skip update of minRecoveryPoint only when the checkpoint gets old.\n> \n> Yes.\n> \n> \n> > d) Skip call to UpdateCheckPointDistanceEstimate() when RedoRecPtr <=\n> > PriorRedoPtr.\n> \n> But \"RedoRecPtr <= PriorRedoPtr\" will never happen, will it? Because a\n\nI didn't believe that it happens. (So, it came from my\nconservativeness, or laziness, or both:p) The code dates from 2009 and\nStartupXLOG made a concurrent checkpoint with bgwriter. But as of at\nleast 9.5, StartupXLOG doesn't directly call CreateCheckPoint. So I\nthink that won't happen.\n\nSo, in short, I agree to remove it or turn it into Assert().\n\n> restartpoint is skipped at the beginning of CreateRestartPoint() in\n> that case. 
If this understanding is right, the check of \"RedoRecPtr <=\n> PriorRedoPtr\" is not necessary before calling\n> UpdateCheckPointDistanceEstimate().\n> \n> \n> +\t\tControlFile->minRecoveryPoint = InvalidXLogRecPtr;\n> +\t\tControlFile->minRecoveryPointTLI = 0;\n> \n> Don't we need to update LocalMinRecoveryPoint and\n> LocalMinRecoveryPointTLI after this? Maybe it's not necessary, but\n> ISTM that it's safer and better to always update them whether the\n> state is DB_IN_ARCHIVE_RECOVERY or not.\n\nAgreed that it's safer and tidier.\n\n> \t\tif (flags & CHECKPOINT_IS_SHUTDOWN)\n> \t\t\tControlFile->state = DB_SHUTDOWNED_IN_RECOVERY;\n> \n> Same as above. IMO it's safer and better to always update the state\n> (whether the state is DB_IN_ARCHIVE_RECOVERY or not) if\n> CHECKPOINT_IS_SHUTDOWN flag is passed.\n\nThat means we may exit recovery mode after ShutdownXLOG called\nCreateRestartPoint. I don't think that may happen. So I'd rather add\nAssert ((flags&CHECKPOINT_IS_SHUTDOWN) == 0) there instead.\n\nI'll post the new version later.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 22 Feb 2022 17:44:01 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "At Tue, 22 Feb 2022 17:44:01 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Tue, 22 Feb 2022 01:59:45 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> > \n> > > Of the following, I think we should do (a) and (b) to make future\n> > > backpatchings easier.\n> > > a) Use RedoRecPtr and PriorRedoPtr after they are assigned.\n> > > b) Move assignment to PriorRedoPtr into the ControlFileLock section.\n> > \n> > I failed to understand how (a) and (b) can make the backpatching\n> > easier. 
How easy to backpatch seems the same whether we apply (a) and\n> > (b) or not...\n> \n> That presumes that the patch applied to master contains (a) and (b).\n> So if it doesn't, those are not needed by older branches.\n\nI was once going to remove them. But according to the discussion below,\nthe patch for back-patching is now quite close to that for the master\nbranch. So I left them alone.\n\n> > > d) Skip call to UpdateCheckPointDistanceEstimate() when RedoRecPtr <=\n> > > PriorRedoPtr.\n> > \n> > But \"RedoRecPtr <= PriorRedoPtr\" will never happen, will it? Because a\n> \n> I didn't believe that it happens. (So, it came from my\n> conservativeness, or laziness, or both:p) The code dates from 2009 and\n> StartupXLOG made a concurrent checkpoint with bgwriter. But as of at\n> least 9.5, StartupXLOG doesn't directly call CreateCheckPoint. So I\n> think that won't happen.\n> \n> So, in short, I agree to remove it or turn it into Assert().\n\nIt was a bit beside the point. If we assume RedoRecPtr is always larger\nthan PriorRedoPtr and thus we don't need to check that there, we\nshould also remove the \"if (PriorRedoPtr < RedoRecPtr)\" branch just\nabove, which means the patch for back-branches gets very close to that\nfor the master. Do we make such a large change on back branches?\nAnyway, this version takes that way for now.\n\n> > \t\tif (flags & CHECKPOINT_IS_SHUTDOWN)\n> > \t\t\tControlFile->state = DB_SHUTDOWNED_IN_RECOVERY;\n> > \n> > Same as above. IMO it's safer and better to always update the state\n> > (whether the state is DB_IN_ARCHIVE_RECOVERY or not) if\n> > CHECKPOINT_IS_SHUTDOWN flag is passed.\n> \n> That means we may exit recovery mode after ShutdownXLOG called\n> CreateRestartPoint. I don't think that may happen. 
So I'd rather add\n> Assert ((flags&CHECKPOINT_IS_SHUTDOWN) == 0) there instead.\n\nSo this version for v14 gets updated in the following points.\n\nCompletely removed the code path for the case where some other process runs\na simultaneous checkpoint.\n\nRemoved the condition (RedoRecPtr > PriorRedoPtr) for the\nUpdateCheckPointDistanceEstimate() call.\n\nAdded an assertion to the recovery-end path.\n\n# Honestly I feel this is a bit too much for back-patching, though.\n\nWhile making patches for v12, I saw a test failure of pg_rewind for an\nuncertain reason. I'm investigating that but I post this for\ndiscussion.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n From e983f3d4c2dbeea742aed0ef1e209e7821f6687f Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Mon, 14 Feb 2022 13:04:33 +0900\nSubject: [PATCH v2] Correctly update control file at the end of archive\n recovery\n\nCreateRestartPoint runs WAL file cleanup based on the checkpoint that\nhas just finished in the function. If the database has exited\nDB_IN_ARCHIVE_RECOVERY state when the function is going to update the\ncontrol file, the function refrains from updating the file at all then\nproceeds to WAL cleanup having the latest REDO LSN, which is now\ninconsistent with the control file. As a result, the succeeding\ncleanup procedure overly removes WAL files against the control file\nand leaves the database unrecoverable until the next checkpoint finishes.\n\nAlong with that fix, we remove a dead code path for the case where some\nother process ran a simultaneous checkpoint. 
It seems like just a\npreventive measure but it's no longer useful because we are sure that\ncheckpoint is performed only by checkpointer except single process\nmode.\n---\n src/backend/access/transam/xlog.c | 73 ++++++++++++++++++++-----------\n 1 file changed, 47 insertions(+), 26 deletions(-)\n\ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex 6208e123e5..ff4a90eacc 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -9587,6 +9587,9 @@ CreateRestartPoint(int flags)\n \tXLogSegNo\t_logSegNo;\n \tTimestampTz xtime;\n \n+\t/* we don't assume concurrent checkpoint/restartpoint to run */\n+\tAssert (!IsUnderPostmaster || MyBackendType == B_CHECKPOINTER);\n+\n \t/* Get a local copy of the last safe checkpoint record. */\n \tSpinLockAcquire(&XLogCtl->info_lck);\n \tlastCheckPointRecPtr = XLogCtl->lastCheckPointRecPtr;\n@@ -9653,7 +9656,7 @@ CreateRestartPoint(int flags)\n \n \t/* Also update the info_lck-protected copy */\n \tSpinLockAcquire(&XLogCtl->info_lck);\n-\tXLogCtl->RedoRecPtr = lastCheckPoint.redo;\n+\tXLogCtl->RedoRecPtr = RedoRecPtr;\n \tSpinLockRelease(&XLogCtl->info_lck);\n \n \t/*\n@@ -9672,7 +9675,10 @@ CreateRestartPoint(int flags)\n \t/* Update the process title */\n \tupdate_checkpoint_display(flags, true, false);\n \n-\tCheckPointGuts(lastCheckPoint.redo, flags);\n+\tCheckPointGuts(RedoRecPtr, flags);\n+\n+\t/* Update pg_control */\n+\tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n \n \t/*\n \t * Remember the prior checkpoint's redo ptr for\n@@ -9680,31 +9686,29 @@ CreateRestartPoint(int flags)\n \t */\n \tPriorRedoPtr = ControlFile->checkPointCopy.redo;\n \n+\tAssert (PriorRedoPtr < RedoRecPtr);\n+\n+\tControlFile->checkPoint = lastCheckPointRecPtr;\n+\tControlFile->checkPointCopy = lastCheckPoint;\n+\n+\t/* Update control file using current time */\n+\tControlFile->time = (pg_time_t) time(NULL);\n+\n \t/*\n-\t * Update pg_control, using current time. 
Check that it still shows\n-\t * DB_IN_ARCHIVE_RECOVERY state and an older checkpoint, else do nothing;\n-\t * this is a quick hack to make sure nothing really bad happens if somehow\n-\t * we get here after the end-of-recovery checkpoint.\n+\t * Ensure minRecoveryPoint is past the checkpoint record while archive\n+\t * recovery is still ongoing. Normally, this will have happened already\n+\t * while writing out dirty buffers, but not necessarily - e.g. because no\n+\t * buffers were dirtied. We do this because a non-exclusive base backup\n+\t * uses minRecoveryPoint to determine which WAL files must be included in\n+\t * the backup, and the file (or files) containing the checkpoint record\n+\t * must be included, at a minimum. Note that for an ordinary restart of\n+\t * recovery there's no value in having the minimum recovery point any\n+\t * earlier than this anyway, because redo will begin just after the\n+\t * checkpoint record. This is a quick hack to make sure nothing really bad\n+\t * happens if somehow we get here after the end-of-recovery checkpoint.\n \t */\n-\tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n-\tif (ControlFile->state == DB_IN_ARCHIVE_RECOVERY &&\n-\t\tControlFile->checkPointCopy.redo < lastCheckPoint.redo)\n+\tif (ControlFile->state == DB_IN_ARCHIVE_RECOVERY)\n \t{\n-\t\tControlFile->checkPoint = lastCheckPointRecPtr;\n-\t\tControlFile->checkPointCopy = lastCheckPoint;\n-\t\tControlFile->time = (pg_time_t) time(NULL);\n-\n-\t\t/*\n-\t\t * Ensure minRecoveryPoint is past the checkpoint record. Normally,\n-\t\t * this will have happened already while writing out dirty buffers,\n-\t\t * but not necessarily - e.g. because no buffers were dirtied. We do\n-\t\t * this because a non-exclusive base backup uses minRecoveryPoint to\n-\t\t * determine which WAL files must be included in the backup, and the\n-\t\t * file (or files) containing the checkpoint record must be included,\n-\t\t * at a minimum. 
Note that for an ordinary restart of recovery there's\n-\t\t * no value in having the minimum recovery point any earlier than this\n-\t\t * anyway, because redo will begin just after the checkpoint record.\n-\t\t */\n \t\tif (ControlFile->minRecoveryPoint < lastCheckPointEndPtr)\n \t\t{\n \t\t\tControlFile->minRecoveryPoint = lastCheckPointEndPtr;\n@@ -9716,8 +9720,25 @@ CreateRestartPoint(int flags)\n \t\t}\n \t\tif (flags & CHECKPOINT_IS_SHUTDOWN)\n \t\t\tControlFile->state = DB_SHUTDOWNED_IN_RECOVERY;\n-\t\tUpdateControlFile();\n \t}\n+\telse\n+\t{\n+\t\t/* recovery mode is not supposed to end during shutdown restartpoint */\n+\t\tAssert((flags & CHECKPOINT_IS_SHUTDOWN) == 0);\n+\n+\t\t/*\n+\t\t * Aarchive recovery has ended. Crash recovery ever after should\n+\t\t * always recover to the end of WAL\n+\t\t */\n+\t\tControlFile->minRecoveryPoint = InvalidXLogRecPtr;\n+\t\tControlFile->minRecoveryPointTLI = 0;\n+\n+\t\t/* also update local copy */\n+\t\tminRecoveryPoint = InvalidXLogRecPtr;\n+\t\tminRecoveryPointTLI = 0;\n+\t}\n+\n+\tUpdateControlFile();\n \tLWLockRelease(ControlFileLock);\n \n \t/*\n@@ -9804,7 +9825,7 @@ CreateRestartPoint(int flags)\n \txtime = GetLatestXTime();\n \tereport((log_checkpoints ? LOG : DEBUG2),\n \t\t\t(errmsg(\"recovery restart point at %X/%X\",\n-\t\t\t\t\tLSN_FORMAT_ARGS(lastCheckPoint.redo)),\n+\t\t\t\t\tLSN_FORMAT_ARGS(RedoRecPtr)),\n \t\t\t xtime ? errdetail(\"Last completed transaction was at log time %s.\",\n \t\t\t\t\t\t\t timestamptz_to_str(xtime)) : 0));\n \n-- \n2.27.0\n\n\n From 13329169b996509a3a853afb9c283c3b27e0eab7 Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Fri, 25 Feb 2022 14:46:41 +0900\nSubject: [PATCH v2] Correctly update contfol file at the end of archive \n recovery\n\nCreateRestartPoint runs WAL file cleanup basing on the checkpoint just\nhave finished in the function. 
If the database has exited\nDB_IN_ARCHIVE_RECOVERY state when the function is going to update\ncontrol file, the function refrains from updating the file at all then\nproceeds to WAL cleanup having the latest REDO LSN, which is now\ninconsistent with the control file. As the result, the succeeding\ncleanup procedure overly removes WAL files against the control file\nand leaves unrecoverable database until the next checkpoint finishes.\n\nAlong with that fix, we remove a dead code path for the case some\nother process ran a simultaneous checkpoint. It seems like just a\npreventive measure but it's no longer useful because we are sure that\ncheckpoint is performed only by checkpointer except single process\nmode.\n---\n src/backend/access/transam/xlog.c | 73 ++++++++++++++++++++-----------\n 1 file changed, 47 insertions(+), 26 deletions(-)\n\ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex 3d76fad128..3670ff81e7 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -9376,6 +9376,9 @@ CreateRestartPoint(int flags)\n \t */\n \tLWLockAcquire(CheckpointLock, LW_EXCLUSIVE);\n \n+\t/* we don't assume concurrent checkpoint/restartpoint to run */\n+\tAssert (!IsUnderPostmaster || MyBackendType == B_CHECKPOINTER);\n+\n \t/* Get a local copy of the last safe checkpoint record. 
*/\n \tSpinLockAcquire(&XLogCtl->info_lck);\n \tlastCheckPointRecPtr = XLogCtl->lastCheckPointRecPtr;\n@@ -9445,7 +9448,7 @@ CreateRestartPoint(int flags)\n \n \t/* Also update the info_lck-protected copy */\n \tSpinLockAcquire(&XLogCtl->info_lck);\n-\tXLogCtl->RedoRecPtr = lastCheckPoint.redo;\n+\tXLogCtl->RedoRecPtr = RedoRecPtr;\n \tSpinLockRelease(&XLogCtl->info_lck);\n \n \t/*\n@@ -9461,7 +9464,10 @@ CreateRestartPoint(int flags)\n \tif (log_checkpoints)\n \t\tLogCheckpointStart(flags, true);\n \n-\tCheckPointGuts(lastCheckPoint.redo, flags);\n+\tCheckPointGuts(RedoRecPtr, flags);\n+\n+\t/* Update pg_control */\n+\tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n \n \t/*\n \t * Remember the prior checkpoint's redo ptr for\n@@ -9469,31 +9475,29 @@ CreateRestartPoint(int flags)\n \t */\n \tPriorRedoPtr = ControlFile->checkPointCopy.redo;\n \n+\tAssert (PriorRedoPtr < RedoRecPtr);\n+\n+\tControlFile->checkPoint = lastCheckPointRecPtr;\n+\tControlFile->checkPointCopy = lastCheckPoint;\n+\n+\t/* Update control file using current time */\n+\tControlFile->time = (pg_time_t) time(NULL);\n+\n \t/*\n-\t * Update pg_control, using current time. Check that it still shows\n-\t * DB_IN_ARCHIVE_RECOVERY state and an older checkpoint, else do nothing;\n-\t * this is a quick hack to make sure nothing really bad happens if somehow\n-\t * we get here after the end-of-recovery checkpoint.\n+\t * Ensure minRecoveryPoint is past the checkpoint record while archive\n+\t * recovery is still ongoing. Normally, this will have happened already\n+\t * while writing out dirty buffers, but not necessarily - e.g. because no\n+\t * buffers were dirtied. We do this because a non-exclusive base backup\n+\t * uses minRecoveryPoint to determine which WAL files must be included in\n+\t * the backup, and the file (or files) containing the checkpoint record\n+\t * must be included, at a minimum. 
Note that for an ordinary restart of\n+\t * recovery there's no value in having the minimum recovery point any\n+\t * earlier than this anyway, because redo will begin just after the\n+\t * checkpoint record. This is a quick hack to make sure nothing really bad\n+\t * happens if somehow we get here after the end-of-recovery checkpoint.\n \t */\n-\tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n-\tif (ControlFile->state == DB_IN_ARCHIVE_RECOVERY &&\n-\t\tControlFile->checkPointCopy.redo < lastCheckPoint.redo)\n+\tif (ControlFile->state == DB_IN_ARCHIVE_RECOVERY)\n \t{\n-\t\tControlFile->checkPoint = lastCheckPointRecPtr;\n-\t\tControlFile->checkPointCopy = lastCheckPoint;\n-\t\tControlFile->time = (pg_time_t) time(NULL);\n-\n-\t\t/*\n-\t\t * Ensure minRecoveryPoint is past the checkpoint record. Normally,\n-\t\t * this will have happened already while writing out dirty buffers,\n-\t\t * but not necessarily - e.g. because no buffers were dirtied. We do\n-\t\t * this because a non-exclusive base backup uses minRecoveryPoint to\n-\t\t * determine which WAL files must be included in the backup, and the\n-\t\t * file (or files) containing the checkpoint record must be included,\n-\t\t * at a minimum. Note that for an ordinary restart of recovery there's\n-\t\t * no value in having the minimum recovery point any earlier than this\n-\t\t * anyway, because redo will begin just after the checkpoint record.\n-\t\t */\n \t\tif (ControlFile->minRecoveryPoint < lastCheckPointEndPtr)\n \t\t{\n \t\t\tControlFile->minRecoveryPoint = lastCheckPointEndPtr;\n@@ -9505,8 +9509,25 @@ CreateRestartPoint(int flags)\n \t\t}\n \t\tif (flags & CHECKPOINT_IS_SHUTDOWN)\n \t\t\tControlFile->state = DB_SHUTDOWNED_IN_RECOVERY;\n-\t\tUpdateControlFile();\n \t}\n+\telse\n+\t{\n+\t\t/* recovery mode is not supposed to end during shutdown restartpoint */\n+\t\tAssert((flags & CHECKPOINT_IS_SHUTDOWN) == 0);\n+\n+\t\t/*\n+\t\t * Aarchive recovery has ended. 
Crash recovery ever after should\n+\t\t * always recover to the end of WAL\n+\t\t */\n+\t\tControlFile->minRecoveryPoint = InvalidXLogRecPtr;\n+\t\tControlFile->minRecoveryPointTLI = 0;\n+\n+\t\t/* also update local copy */\n+\t\tminRecoveryPoint = InvalidXLogRecPtr;\n+\t\tminRecoveryPointTLI = 0;\n+\t}\n+\n+\tUpdateControlFile();\n \tLWLockRelease(ControlFileLock);\n \n \t/*\n@@ -9590,7 +9611,7 @@ CreateRestartPoint(int flags)\n \txtime = GetLatestXTime();\n \tereport((log_checkpoints ? LOG : DEBUG2),\n \t\t\t(errmsg(\"recovery restart point at %X/%X\",\n-\t\t\t\t\t(uint32) (lastCheckPoint.redo >> 32), (uint32) lastCheckPoint.redo),\n+\t\t\t\t\t(uint32) (RedoRecPtr >> 32), (uint32) RedoRecPtr),\n \t\t\t xtime ? errdetail(\"Last completed transaction was at log time %s.\",\n \t\t\t\t\t\t\t timestamptz_to_str(xtime)) : 0));\n \n-- \n2.27.0", "msg_date": "Fri, 25 Feb 2022 15:31:12 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "At Fri, 25 Feb 2022 15:31:12 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> While making patches for v12, I see a test failure of pg_rewind for\n> uncertain reason. I'm investigating that but I post this for\n> discussion.\n\nHmm. Too stupid. Somehow I overly removed the latchet condition for\nminRecoveryPoint. 
So the same patch worked for v12.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 25 Feb 2022 16:06:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "At Fri, 25 Feb 2022 16:06:31 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Fri, 25 Feb 2022 15:31:12 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > While making patches for v12, I saw a test failure of pg_rewind for an\n> > uncertain reason. I'm investigating that but I post this for\n> > discussion.\n> \n> Hmm. Too stupid. Somehow I overly removed the latchet condition for\n> minRecoveryPoint. So the same patch worked for v12.\n\nSo, these are the patches for pg12-10. 11 can share the same patch with\n12. 10 has differences in two points.\n\n10 has ControlFile->prevCheckPoint.\n\nThe DETAILS of the \"recovery restart point at\" message is not\ncapitalized. But I suppose it is so close to EOL that we don't\nwant to \"fix\" it and risk existing use cases.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n From c89e2b509723b68897f2af49a154af2a69f0747b Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Fri, 25 Feb 2022 15:04:00 +0900\nSubject: [PATCH v3] Correctly update control file at the end of archive\n recovery\n\nCreateRestartPoint runs WAL file cleanup based on the checkpoint that\nhas just finished in the function. If the database has exited\nDB_IN_ARCHIVE_RECOVERY state when the function is going to update the\ncontrol file, the function refrains from updating the file at all then\nproceeds to WAL cleanup having the latest REDO LSN, which is now\ninconsistent with the control file. 
As the result, the succeeding\ncleanup procedure overly removes WAL files against the control file\nand leaves unrecoverable database until the next checkpoint finishes.\n\nAlong with that fix, we remove a dead code path for the case some\nother process ran a simultaneous checkpoint. It seems like just a\npreventive measure but it's no longer useful because we are sure that\ncheckpoint is performed only by checkpointer except single process\nmode.\n---\n src/backend/access/transam/xlog.c | 71 +++++++++++++++++++------------\n 1 file changed, 44 insertions(+), 27 deletions(-)\n\ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex 885558f291..2b2568c475 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -9334,7 +9334,7 @@ CreateRestartPoint(int flags)\n \n \t/* Also update the info_lck-protected copy */\n \tSpinLockAcquire(&XLogCtl->info_lck);\n-\tXLogCtl->RedoRecPtr = lastCheckPoint.redo;\n+\tXLogCtl->RedoRecPtr = RedoRecPtr;\n \tSpinLockRelease(&XLogCtl->info_lck);\n \n \t/*\n@@ -9350,7 +9350,10 @@ CreateRestartPoint(int flags)\n \tif (log_checkpoints)\n \t\tLogCheckpointStart(flags, true);\n \n-\tCheckPointGuts(lastCheckPoint.redo, flags);\n+\tCheckPointGuts(RedoRecPtr, flags);\n+\n+\t/* Update pg_control */\n+\tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n \n \t/*\n \t * Remember the prior checkpoint's redo ptr for\n@@ -9358,31 +9361,28 @@ CreateRestartPoint(int flags)\n \t */\n \tPriorRedoPtr = ControlFile->checkPointCopy.redo;\n \n+\tAssert (PriorRedoPtr < RedoRecPtr);\n+\n+\tControlFile->checkPoint = lastCheckPointRecPtr;\n+\tControlFile->checkPointCopy = lastCheckPoint;\n+\n+\t/* Update control file using current time */\n+\tControlFile->time = (pg_time_t) time(NULL);\n+\n \t/*\n-\t * Update pg_control, using current time. 
Check that it still shows\n-\t * IN_ARCHIVE_RECOVERY state and an older checkpoint, else do nothing;\n-\t * this is a quick hack to make sure nothing really bad happens if somehow\n-\t * we get here after the end-of-recovery checkpoint.\n+\t * Ensure minRecoveryPoint is past the checkpoint record while archive\n+\t * recovery is still ongoing. Normally, this will have happened already\n+\t * while writing out dirty buffers, but not necessarily - e.g. because no\n+\t * buffers were dirtied. We do this because a non-exclusive base backup\n+\t * uses minRecoveryPoint to determine which WAL files must be included in\n+\t * the backup, and the file (or files) containing the checkpoint record\n+\t * must be included, at a minimum. Note that for an ordinary restart of\n+\t * recovery there's no value in having the minimum recovery point any\n+\t * earlier than this anyway, because redo will begin just after the\n+\t * checkpoint record.\n \t */\n-\tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n-\tif (ControlFile->state == DB_IN_ARCHIVE_RECOVERY &&\n-\t\tControlFile->checkPointCopy.redo < lastCheckPoint.redo)\n+\tif (ControlFile->state == DB_IN_ARCHIVE_RECOVERY)\n \t{\n-\t\tControlFile->checkPoint = lastCheckPointRecPtr;\n-\t\tControlFile->checkPointCopy = lastCheckPoint;\n-\t\tControlFile->time = (pg_time_t) time(NULL);\n-\n-\t\t/*\n-\t\t * Ensure minRecoveryPoint is past the checkpoint record. Normally,\n-\t\t * this will have happened already while writing out dirty buffers,\n-\t\t * but not necessarily - e.g. because no buffers were dirtied. We do\n-\t\t * this because a non-exclusive base backup uses minRecoveryPoint to\n-\t\t * determine which WAL files must be included in the backup, and the\n-\t\t * file (or files) containing the checkpoint record must be included,\n-\t\t * at a minimum. 
Note that for an ordinary restart of recovery there's\n-\t\t * no value in having the minimum recovery point any earlier than this\n-\t\t * anyway, because redo will begin just after the checkpoint record.\n-\t\t */\n \t\tif (ControlFile->minRecoveryPoint < lastCheckPointEndPtr)\n \t\t{\n \t\t\tControlFile->minRecoveryPoint = lastCheckPointEndPtr;\n@@ -9393,9 +9393,26 @@ CreateRestartPoint(int flags)\n \t\t\tminRecoveryPointTLI = ControlFile->minRecoveryPointTLI;\n \t\t}\n \t\tif (flags & CHECKPOINT_IS_SHUTDOWN)\n-\t\t\tControlFile->state = DB_SHUTDOWNED_IN_RECOVERY;\n-\t\tUpdateControlFile();\n+\t\tControlFile->state = DB_SHUTDOWNED_IN_RECOVERY;\n \t}\n+\telse\n+\t{\n+\t\t/* recovery mode is not supposed to end during shutdown restartpoint */\n+\t\tAssert((flags & CHECKPOINT_IS_SHUTDOWN) == 0);\n+\n+\t\t/*\n+\t\t * Aarchive recovery has ended. Crash recovery ever after should\n+\t\t * always recover to the end of WAL\n+\t\t */\n+\t\tControlFile->minRecoveryPoint = InvalidXLogRecPtr;\n+\t\tControlFile->minRecoveryPointTLI = 0;\n+\n+\t\t/* also update local copy */\n+\t\tminRecoveryPoint = InvalidXLogRecPtr;\n+\t\tminRecoveryPointTLI = 0;\n+\t}\n+\n+\tUpdateControlFile();\n \tLWLockRelease(ControlFileLock);\n \n \t/*\n@@ -9470,7 +9487,7 @@ CreateRestartPoint(int flags)\n \txtime = GetLatestXTime();\n \tereport((log_checkpoints ? LOG : DEBUG2),\n \t\t\t(errmsg(\"recovery restart point at %X/%X\",\n-\t\t\t\t\t(uint32) (lastCheckPoint.redo >> 32), (uint32) lastCheckPoint.redo),\n+\t\t\t\t\t(uint32) (RedoRecPtr >> 32), (uint32) RedoRecPtr),\n \t\t\t xtime ? 
errdetail(\"Last completed transaction was at log time %s.\",\n \t\t\t\t\t\t\t timestamptz_to_str(xtime)) : 0));\n \n-- \n2.27.0\n\n\n From 7dd174d165b3639b573bfc47c2e8b2fba61395c5 Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Fri, 25 Feb 2022 16:35:16 +0900\nSubject: [PATCH v3] Correctly update contfol file at the end of archive\n recovery\n\nCreateRestartPoint runs WAL file cleanup basing on the checkpoint just\nhave finished in the function. If the database has exited\nDB_IN_ARCHIVE_RECOVERY state when the function is going to update\ncontrol file, the function refrains from updating the file at all then\nproceeds to WAL cleanup having the latest REDO LSN, which is now\ninconsistent with the control file. As the result, the succeeding\ncleanup procedure overly removes WAL files against the control file\nand leaves unrecoverable database until the next checkpoint finishes.\n\nAlong with that fix, we remove a dead code path for the case some\nother process ran a simultaneous checkpoint. 
It seems like just a\npreventive measure but it's no longer useful because we are sure that\ncheckpoint is performed only by checkpointer except single process\nmode.\n---\n src/backend/access/transam/xlog.c | 73 +++++++++++++++++++------------\n 1 file changed, 45 insertions(+), 28 deletions(-)\n\ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex c64febdb53..9fb66ad7d5 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -9434,7 +9434,7 @@ CreateRestartPoint(int flags)\n \n \t/* Also update the info_lck-protected copy */\n \tSpinLockAcquire(&XLogCtl->info_lck);\n-\tXLogCtl->RedoRecPtr = lastCheckPoint.redo;\n+\tXLogCtl->RedoRecPtr = RedoRecPtr;\n \tSpinLockRelease(&XLogCtl->info_lck);\n \n \t/*\n@@ -9450,7 +9450,10 @@ CreateRestartPoint(int flags)\n \tif (log_checkpoints)\n \t\tLogCheckpointStart(flags, true);\n \n-\tCheckPointGuts(lastCheckPoint.redo, flags);\n+\tCheckPointGuts(RedoRecPtr, flags);\n+\n+\t/* Update pg_control */\n+\tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n \n \t/*\n \t * Remember the prior checkpoint's redo pointer, used later to determine\n@@ -9458,32 +9461,29 @@ CreateRestartPoint(int flags)\n \t */\n \tPriorRedoPtr = ControlFile->checkPointCopy.redo;\n \n+\tAssert (PriorRedoPtr < RedoRecPtr);\n+\n+\tControlFile->prevCheckPoint = ControlFile->checkPoint;\n+\tControlFile->checkPoint = lastCheckPointRecPtr;\n+\tControlFile->checkPointCopy = lastCheckPoint;\n+\n+\t/* Update control file using current time */\n+\tControlFile->time = (pg_time_t) time(NULL);\n+\n \t/*\n-\t * Update pg_control, using current time. Check that it still shows\n-\t * IN_ARCHIVE_RECOVERY state and an older checkpoint, else do nothing;\n-\t * this is a quick hack to make sure nothing really bad happens if somehow\n-\t * we get here after the end-of-recovery checkpoint.\n+\t * Ensure minRecoveryPoint is past the checkpoint record while archive\n+\t * recovery is still running. 
Normally, this will have happened already\n+\t * while writing out dirty buffers, but not necessarily - e.g. because no\n+\t * buffers were dirtied. We do this because a non-exclusive base backup\n+\t * uses minRecoveryPoint to determine which WAL files must be included in\n+\t * the backup, and the file (or files) containing the checkpoint record\n+\t * must be included, at a minimum. Note that for an ordinary restart of\n+\t * recovery there's no value in having the minimum recovery point any\n+\t * earlier than this anyway, because redo will begin just after the\n+\t * checkpoint record.\n \t */\n-\tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n-\tif (ControlFile->state == DB_IN_ARCHIVE_RECOVERY &&\n-\t\tControlFile->checkPointCopy.redo < lastCheckPoint.redo)\n+\tif (ControlFile->state == DB_IN_ARCHIVE_RECOVERY)\n \t{\n-\t\tControlFile->prevCheckPoint = ControlFile->checkPoint;\n-\t\tControlFile->checkPoint = lastCheckPointRecPtr;\n-\t\tControlFile->checkPointCopy = lastCheckPoint;\n-\t\tControlFile->time = (pg_time_t) time(NULL);\n-\n-\t\t/*\n-\t\t * Ensure minRecoveryPoint is past the checkpoint record. Normally,\n-\t\t * this will have happened already while writing out dirty buffers,\n-\t\t * but not necessarily - e.g. because no buffers were dirtied. We do\n-\t\t * this because a non-exclusive base backup uses minRecoveryPoint to\n-\t\t * determine which WAL files must be included in the backup, and the\n-\t\t * file (or files) containing the checkpoint record must be included,\n-\t\t * at a minimum. 
Note that for an ordinary restart of recovery there's\n-\t\t * no value in having the minimum recovery point any earlier than this\n-\t\t * anyway, because redo will begin just after the checkpoint record.\n-\t\t */\n \t\tif (ControlFile->minRecoveryPoint < lastCheckPointEndPtr)\n \t\t{\n \t\t\tControlFile->minRecoveryPoint = lastCheckPointEndPtr;\n@@ -9494,9 +9494,26 @@ CreateRestartPoint(int flags)\n \t\t\tminRecoveryPointTLI = ControlFile->minRecoveryPointTLI;\n \t\t}\n \t\tif (flags & CHECKPOINT_IS_SHUTDOWN)\n-\t\t\tControlFile->state = DB_SHUTDOWNED_IN_RECOVERY;\n-\t\tUpdateControlFile();\n+\t\tControlFile->state = DB_SHUTDOWNED_IN_RECOVERY;\n \t}\n+\telse\n+\t{\n+\t\t/* recovery mode is not supposed to end during shutdown restartpoint */\n+\t\tAssert((flags & CHECKPOINT_IS_SHUTDOWN) == 0);\n+\n+\t\t/*\n+\t\t * Aarchive recovery has ended. Crash recovery ever after should\n+\t\t * always recover to the end of WAL\n+\t\t */\n+\t\tControlFile->minRecoveryPoint = InvalidXLogRecPtr;\n+\t\tControlFile->minRecoveryPointTLI = 0;\n+\n+\t\t/* also update local copy */\n+\t\tminRecoveryPoint = InvalidXLogRecPtr;\n+\t\tminRecoveryPointTLI = 0;\n+\t}\n+\n+\tUpdateControlFile();\n \tLWLockRelease(ControlFileLock);\n \n \t/*\n@@ -9579,7 +9596,7 @@ CreateRestartPoint(int flags)\n \txtime = GetLatestXTime();\n \tereport((log_checkpoints ? LOG : DEBUG2),\n \t\t\t(errmsg(\"recovery restart point at %X/%X\",\n-\t\t\t\t\t(uint32) (lastCheckPoint.redo >> 32), (uint32) lastCheckPoint.redo),\n+\t\t\t\t\t(uint32) (RedoRecPtr >> 32), (uint32) RedoRecPtr),\n \t\t\t xtime ? 
errdetail(\"last completed transaction was at log time %s\",\n \t\t\t\t\t\t\t timestamptz_to_str(xtime)) : 0));\n \n-- \n2.27.0", "msg_date": "Fri, 25 Feb 2022 16:47:01 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "At Fri, 25 Feb 2022 16:47:01 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> So, this is the patches for pg12-10. 11 can share the same patch with\n> 12. 10 has differences in two points.\n> \n> 10 has ControlFile->prevCheckPoint.\n> \n> The DETAILS of the \"recovery restart point at\" message is not\n> capitalized. But I suppose it is so close to EOL so that we don't\n> want to \"fix\" it risking existing usecases.\n\nUgh! Wait for a moment. Something's wrong.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 25 Feb 2022 16:52:30 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "At Fri, 25 Feb 2022 16:52:30 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Ugh! Wait for a moment. Something's wrong.\n\nSorry, what is wrong was my working directory. It was broken by my\nbogus operation. 
All the files apply corresponding versions correctly.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 25 Feb 2022 17:14:12 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "At Mon, 14 Feb 2022 14:52:15 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> In this version, 0001 gets one fix and two comment updates.\n\nWhile the discussion on back-patching of 0001 is proceeding on another\nbranch, the main patch itself looks as if it is rotting on\nCF-App. So I rebased v10 on the current master. 0001 is replaced by\nan adjusted patch based on the latest \"control file update fix\" patch.\n\nIf someone wants to comment on the message-fix patches (0002-0004), be\nour guest. 0005 also wants opinions.\n\n\n0001: Fix possible incorrect controlfile update that leads to\n unrecoverable database.\n\n0002: Add REDO/Checkpoint LSNs to checkpoint-end log message.\n (The main patch in this thread)\n\n0003: Replace (WAL-)location to LSN in pg_controldata.\n\n0004: Replace (WAL-)location to LSN in user-facing texts.\n (This doesn't reflect my recent comments.)\n\n0005: Unhyphenate the word archive-recovery and similar words.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 04 Mar 2022 14:10:38 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "On Fri, Mar 4, 2022 at 10:40 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 14 Feb 2022 14:52:15 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > In this version, 0001 gets one fix and two comment updates.\n>\n> While the discussion on back-patching of 0001 is proceeding on another\n> branch, the main patch 
itself looks as if it is rotting on\n> CF-App. So I rebased v10 on the current master. 0001 is replaced by\n> an adjusted patch based on the latest \"control file update fix\" patch.\n>\n> 0001: Fix possible incorrect controlfile update that leads to\n> unrecoverable database.\n\n0001 - I don't think you need to do this as UpdateControlFile\n(update_controlfile) will anyway update it, no?\n+ /* Update control file using current time */\n+ ControlFile->time = (pg_time_t) time(NULL);\n\n> 0002: Add REDO/Checkpoint LSNs to checkpoint-end log message.\n> (The main patch in this thread)\n\n0002 - If at all the intention is to say that no ControlFileLock is\nrequired while reading ControlFile->checkPoint and\nControlFile->checkPointCopy.redo, let's say it, no? How about\nsomething like \"No ControlFileLock is required while reading\nControlFile->checkPoint and ControlFile->checkPointCopy.redo as there\ncan't be any other process updating them concurrently.\"?\n\n+ /* we are the only updator of these variables */\n+ LSN_FORMAT_ARGS(ControlFile->checkPoint),\n+ LSN_FORMAT_ARGS(ControlFile->checkPointCopy.redo))));\n\n> 0003: Replace (WAL-)location to LSN in pg_controldata.\n>\n> 0004: Replace (WAL-)location to LSN in user-facing texts.\n> (This doesn't reflect my recent comments.)\n\nIf you don't mind, can you please put the comments here?\n\n> 0005: Unhyphenate the word archive-recovery and similar words.\n\n0005 - How about replacing \"crash-recovery\" with \"crash recovery\" in\npostgres-ref.sgml too?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 15 Mar 2022 12:19:47 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "At Tue, 15 Mar 2022 12:19:47 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Fri, Mar 4, 2022 at 10:40 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> 
0001 - I don't think you need to do this as UpdateControlFile\n> (update_controlfile) will anyway update it, no?\n> + /* Update control file using current time */\n> + ControlFile->time = (pg_time_t) time(NULL);\n\nUgh.. Yes. It is a copy-pasto from older versions. They may have the\nsame copy-pasto..\n\n\n\n> > 0002: Add REDO/Checkpoint LSNs to checkpoint-end log message.\n> > (The main patch in this thread)\n> \n> 0002 - If at all the intention is to say that no ControlFileLock is\n> required while reading ControlFile->checkPoint and\n> ControlFile->checkPointCopy.redo, let's say it, no? How about\n> something like \"No ControlFileLock is required while reading\n> ControlFile->checkPoint and ControlFile->checkPointCopy.redo as there\n> can't be any other process updating them concurrently.\"?\n> \n> + /* we are the only updator of these variables */\n> + LSN_FORMAT_ARGS(ControlFile->checkPoint),\n> + LSN_FORMAT_ARGS(ControlFile->checkPointCopy.redo))));\n\nI thought the comment explains that. But it would be better to be more\nspecific. It is changed as follows.\n\n> * ControlFileLock is not required as we are the only\n> * updator of these variables.\n\n\n> > 0003: Replace (WAL-)location to LSN in pg_controldata.\n> >\n> > 0004: Replace (WAL-)location to LSN in user-facing texts.\n> > (This doesn't reflect my recent comments.)\n> \n> If you don't mind, can you please put the comments here?\n\nOkay. It's the following message.\n\nhttps://www.postgresql.org/message-id/20220209.115204.1794224638476710282.horikyota.ntt@gmail.com\n\n> The old 0003 (attached 0004):\n> \n> \n> \n> +++ b/src/backend/access/rmgrdesc/xlogdesc.c\n> -\t\tappendStringInfo(buf, \"redo %X/%X; \"\n> +\t\tappendStringInfo(buf, \"redo lsn %X/%X; \"\n> \n> \n> \n> It is shown in the context of a checkpoint record, so I think it is\n> not needed or rather lengthening the dump line uselessly. 
\n> \n> \n> \n> +++ b/src/backend/access/transam/xlog.c\n> -\t\t\t\t(errmsg(\"request to flush past end of generated WAL; request %X/%X, current position %X/%X\",\n> +\t\t\t\t(errmsg(\"request to flush past end of generated WAL; request lsn %X/%X, current lsn %X/%X\",\n> \n> \n> \n> +++ b/src/backend/replication/walsender.c\n> -\t\t\t\t\t(errmsg(\"requested starting point %X/%X is ahead of the WAL flush position of this server %X/%X\",\n> +\t\t\t\t\t(errmsg(\"requested starting point %X/%X is ahead of the WAL flush LSN of this server %X/%X\",\n> \n> \n> \n> \"WAL\" is upper-cased. So it seems rather strange that the \"lsn\" is\n> lower-cased. In the first place the message doesn't look like a\n> user-facing error message and I feel we don't need position or lsn\n> there..\n> \n> \n> \n> +++ b/src/bin/pg_rewind/pg_rewind.c\n> -\t\tpg_log_info(\"servers diverged at WAL location %X/%X on timeline %u\",\n> +\t\tpg_log_info(\"servers diverged at WAL LSN %X/%X on timeline %u\",\n> \n> \n> \n> I feel that we don't need \"WAL\" there.\n> \n> \n> \n> +++ b/src/bin/pg_waldump/pg_waldump.c\n> -\tprintf(_(\" -e, --end=RECPTR stop reading at WAL location RECPTR\\n\"));\n> +\tprintf(_(\" -e, --end=RECPTR stop reading at WAL LSN RECPTR\\n\"));\n> \n> \n> \n> Mmm.. \"WAL LSN RECPTR\" looks strange to me. In the first place I\n> don't think \"RECPTR\" is a user-facing term. Doesn't something like the\n> follows work?\n> \n> \n> \n> +\tprintf(_(\" -e, --end=WAL-LSN stop reading at WAL-LSN\\n\"));\n> \n> \n> \n> In some changes in this patch shorten the main message text of\n> fprintf-ish functions. That makes the succeeding parameters can be\n> inlined.\n\n\n> > 0005: Unhyphenate the word archive-recovery and similars.\n> \n> 0005 - How about replacing \"crash-recovery\" to \"crash recovery\" in\n> postgres-ref.sgml too?\n\nOh, that's a left-over. Fixed. 
Thanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 15 Mar 2022 17:23:40 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "On Tue, Mar 15, 2022 at 05:23:40PM +0900, Kyotaro Horiguchi wrote:\n> At Tue, 15 Mar 2022 12:19:47 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n>> On Fri, Mar 4, 2022 at 10:40 AM Kyotaro Horiguchi\n>> <horikyota.ntt@gmail.com> wrote:\n>> 0001 - I don't think you need to do this as UpdateControlFile\n>> (update_controlfile) will anyway update it, no?\n>> + /* Update control file using current time */\n>> + ControlFile->time = (pg_time_t) time(NULL);\n> \n> Ugh.. Yes. It is a copy-pasto from older versions. They may have the\n> same copy-pasto..\n\nThis thread has shifted to an entirely different discussion,\npresenting patches that touch code paths unrelated to what was first\nstated. Shouldn't you create a new thread with a proper $subject to\nattract a more correct audience? \n--\nMichael", "msg_date": "Tue, 15 Mar 2022 18:26:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "At Tue, 15 Mar 2022 18:26:26 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Mar 15, 2022 at 05:23:40PM +0900, Kyotaro Horiguchi wrote:\n> > At Tue, 15 Mar 2022 12:19:47 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> >> On Fri, Mar 4, 2022 at 10:40 AM Kyotaro Horiguchi\n> >> <horikyota.ntt@gmail.com> wrote:\n> >> 0001 - I don't think you need to do this as UpdateControlFile\n> >> (update_controlfile) will anyway update it, no?\n> >> + /* Update control file using current time */\n> >> + ControlFile->time = (pg_time_t) time(NULL);\n> > \n> > Ugh.. Yes. 
It is a copy-pasto from older versions. They may have the\n> > same copy-pasto..\n> \n> This thread has shifted to an entirely different discussion,\n> presenting patches that touch code paths unrelated to what was first\n> stated. Shouldn't you create a new thread with a proper $subject to\n> attract a more correct audience? \n\nI felt the same some messages ago. I thought Fujii-san wanted to fix\nthe CreateRestartPoint issue before the checkpoint log patch, but he\nseems busy these days.\n\nSince the CreateRestartPoint issue is orthogonal to the main patch,\nI'll separate that part into another thread.\n\nThis thread is discussing one other topic, wordings in user-facing\ntexts. This is also orthogonal (3-dimensionally?) to the two topics.\n\nIn short, I split out the two topics other than checkpoint log to\nother threads.\n\nThanks for cueing me to do that!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 16 Mar 2022 09:19:13 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "At Wed, 16 Mar 2022 09:19:13 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> In short, I split out the two topics other than checkpoint log to\n> other threads.\n\nSo, this is about the main topic of this thread, adding LSNs to\ncheckpoint log. Other topics have moved to other threads [1], [2],\n[3].\n\nI think this is no longer controversial alone. 
So this patch is now\nreally Read-for-Commiter and is waiting to be picked up.\n\nregards.\n\n\n[1] https://www.postgresql.org/message-id/20220316.102444.2193181487576617583.horikyota.ntt%40gmail.com\n[2] https://www.postgresql.org/message-id/20220316.102900.2003692961119672246.horikyota.ntt%40gmail.com\n[3] https://www.postgresql.org/message-id/20220316.102509.785466054344164656.horikyota.ntt%40gmail.com\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 16 Mar 2022 10:29:47 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "This patch is currently showing up with a test failure in the CFBot\nhowever I do *not* believe this is a bug in the patch. I think it's a\nbug in that test which is being discussed elsewhere.\n\nIt's also a very short and straightforward patch that a committer\ncould probably make a decision about whether it's a good idea or not\nand then apply it quickly if so.\n\nJust to give people a leg up and an idea how short the patch is...\nHere's the entire patch:\n\n\ndiff --git a/src/backend/access/transam/xlog.c\nb/src/backend/access/transam/xlog.c\nindex ed16f279b1..b85c76d8f8 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -6121,7 +6121,8 @@ LogCheckpointEnd(bool restartpoint)\n \"%d WAL file(s) added, %d removed, %d recycled; \"\n \"write=%ld.%03d s, sync=%ld.%03d s, total=%ld.%03d s; \"\n \"sync files=%d, longest=%ld.%03d s, average=%ld.%03d s; \"\n- \"distance=%d kB, estimate=%d kB\",\n+ \"distance=%d kB, estimate=%d kB; \"\n+ \"lsn=%X/%X, redo lsn=%X/%X\",\n CheckpointStats.ckpt_bufs_written,\n (double) CheckpointStats.ckpt_bufs_written * 100 / NBuffers,\n CheckpointStats.ckpt_segs_added,\n@@ -6134,14 +6135,21 @@ LogCheckpointEnd(bool restartpoint)\n longest_msecs / 1000, (int) (longest_msecs % 1000),\n average_msecs / 1000, 
(int) (average_msecs % 1000),\n (int) (PrevCheckPointDistance / 1024.0),\n- (int) (CheckPointDistanceEstimate / 1024.0))));\n+ (int) (CheckPointDistanceEstimate / 1024.0),\n+ /*\n+ * ControlFileLock is not required as we are the only\n+ * updator of these variables.\n+ */\n+ LSN_FORMAT_ARGS(ControlFile->checkPoint),\n+ LSN_FORMAT_ARGS(ControlFile->checkPointCopy.redo))));\n else\n ereport(LOG,\n (errmsg(\"checkpoint complete: wrote %d buffers (%.1f%%); \"\n \"%d WAL file(s) added, %d removed, %d recycled; \"\n \"write=%ld.%03d s, sync=%ld.%03d s, total=%ld.%03d s; \"\n \"sync files=%d, longest=%ld.%03d s, average=%ld.%03d s; \"\n- \"distance=%d kB, estimate=%d kB\",\n+ \"distance=%d kB, estimate=%d kB; \"\n+ \"lsn=%X/%X, redo lsn=%X/%X\",\n CheckpointStats.ckpt_bufs_written,\n (double) CheckpointStats.ckpt_bufs_written * 100 / NBuffers,\n CheckpointStats.ckpt_segs_added,\n@@ -6154,7 +6162,13 @@ LogCheckpointEnd(bool restartpoint)\n longest_msecs / 1000, (int) (longest_msecs % 1000),\n average_msecs / 1000, (int) (average_msecs % 1000),\n (int) (PrevCheckPointDistance / 1024.0),\n- (int) (CheckPointDistanceEstimate / 1024.0))));\n+ (int) (CheckPointDistanceEstimate / 1024.0),\n+ /*\n+ * ControlFileLock is not required as we are the only\n+ * updator of these variables.\n+ */\n+ LSN_FORMAT_ARGS(ControlFile->checkPoint),\n+ LSN_FORMAT_ARGS(ControlFile->checkPointCopy.redo))));\n }\n\n /*\n-- \n2.27.0\n\n\n", "msg_date": "Mon, 28 Mar 2022 15:31:16 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "\n\nOn 2022/03/16 10:29, Kyotaro Horiguchi wrote:\n> At Wed, 16 Mar 2022 09:19:13 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n>> In short, I split out the two topics other than checkpoint log to\n>> other threads.\n> \n> So, this is about the main topic of this thread, adding LSNs to\n> checkpint log. 
Other topics have moved to other treads [1], [2] ,\n> [3].\n> \n> I think this is no longer controversial alone. So this patch is now\n> really Read-for-Commiter and is waiting to be picked up.\n\n+1\n\n+\t\t\t\t\t\t * ControlFileLock is not required as we are the only\n+\t\t\t\t\t\t * updator of these variables.\n\nIsn't it better to add \"at this time\" or something at the end of the comment because only we're not always updator of them? No?\n\nBarring any objection, I'm thinking to apply the above small change and commit the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 7 Jul 2022 01:11:33 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "At Thu, 7 Jul 2022 01:11:33 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2022/03/16 10:29, Kyotaro Horiguchi wrote:\n> > At Wed, 16 Mar 2022 09:19:13 +0900 (JST), Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote in\n> >> In short, I split out the two topics other than checkpoint log to\n> >> other threads.\n> > So, this is about the main topic of this thread, adding LSNs to\n> > checkpint log. Other topics have moved to other treads [1], [2] ,\n> > [3].\n> > I think this is no longer controversial alone. So this patch is now\n> > really Read-for-Commiter and is waiting to be picked up.\n> \n> +1\n> \n> + * ControlFileLock is not required as we are the only\n> +\t\t\t\t\t\t * updator of these variables.\n> \n> Isn't it better to add \"at this time\" or something at the end of the\n> comment because only we're not always updator of them? No?\n\nExcluding initialization, (I believe) checkpointer is really the only\nupdator of the variables/members. 
But I'm fine with the addition.\n\n> Barring any objection, I'm thinking to apply the above small change\n> and commit the patch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 07 Jul 2022 16:26:59 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" }, { "msg_contents": "\n\nOn 2022/07/07 16:26, Kyotaro Horiguchi wrote:\n>> + * ControlFileLock is not required as we are the only\n>> +\t\t\t\t\t\t * updator of these variables.\n>>\n>> Isn't it better to add \"at this time\" or something at the end of the\n>> comment because only we're not always updator of them? No?\n> \n> Excluding initialization, (I believe) checkpointer is really the only\n> updator of the variables/members. But I'm fine with the addition.\n\nOk, so I modified the patch slightly and pushed it. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 7 Jul 2022 22:45:39 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message" } ]
[ { "msg_contents": "Hi,\n\nCurrently the documentation of the log_checkpoints GUC says the following:\n\n Some statistics are included in the log messages, including the number\n of buffers written and the time spent writing them.\n\nUsage of the word \"Some\" makes it a vague statement. Why can't we just\nbe clear about what statistics the log_checkpoints GUC can emit, like\nthe attached patch?\n\nThoughts?\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 23 Dec 2021 20:56:22 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Be clear about what log_checkpoints emits in the documentation" }, { "msg_contents": "At Thu, 23 Dec 2021 20:56:22 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> Hi,\n> \n> Currently the documentation of the log_checkpoints GUC says the following:\n> \n> Some statistics are included in the log messages, including the number\n> of buffers written and the time spent writing them.\n> \n> Usage of the word \"Some\" makes it a vague statement. Why can't we just\n> be clear about what statistics the log_checkpoints GUC can emit, like\n> the attached patch?\n> \n> Thoughts?\n\nIt seems like simply a maintenance burden of documentation since it\ndoesn't add any further detail of any item in a checkpoint log\nmessage. 
But I'm not sure we want detailed explanations for them and\nit seems to me we deliberately never explained the detail of any log\nmessages.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 24 Dec 2021 15:00:44 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Be clear about what log_checkpoints emits in the documentation" }, { "msg_contents": "On Fri, Dec 24, 2021 at 11:30 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 23 Dec 2021 20:56:22 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > Hi,\n> >\n> > Currently the documentation of the log_checkpoints GUC says the following:\n> >\n> > Some statistics are included in the log messages, including the number\n> > of buffers written and the time spent writing them.\n> >\n> > Usage of the word \"Some\" makes it a vague statement. Why can't we just\n> > be clear about what statistics the log_checkpoints GUC can emit, like\n> > the attached patch?\n> >\n> > Thoughts?\n>\n> It seems like simply a maintenance burden of documentation since it\n> doesn't add any further detail of any item in a checkpoint log\n> message. But I'm not sure we want detailed explanations for them and\n> it seems to me we deliberately never explained the detail of any log\n> messages.\n\nAgreed. \"Some statistics\" can cover whatever existing and newly added\nstuff, if any. Let it be that way.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 24 Dec 2021 18:01:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Be clear about what log_checkpoints emits in the documentation" } ]
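For readers who do want the individual fields rather than "some statistics", they can be scraped from the checkpoint-complete log line itself. A hedged sketch follows; the sample line mimics the general shape of LogCheckpointEnd output, but the exact wording varies across server versions, so the sample and regexes here are illustrative assumptions:

```python
import re

# Assumed sample, shaped like a checkpoint-complete line; details vary by version.
line = ("checkpoint complete: wrote 42 buffers (0.3%); "
        "0 WAL file(s) added, 0 removed, 0 recycled; "
        "write=0.015 s, sync=0.001 s, total=0.025 s; "
        "sync files=3, longest=0.001 s, average=0.001 s; "
        "distance=1234 kB, estimate=1234 kB")

def checkpoint_stats(line: str) -> dict:
    """Pull the buffer count and the write/sync/total timings out of the line."""
    stats = {"buffers_written": int(re.search(r"wrote (\d+) buffers", line).group(1))}
    for key in ("write", "sync", "total"):
        stats[key + "_s"] = float(re.search(rf"{key}=([\d.]+) s", line).group(1))
    return stats

print(checkpoint_stats(line))
```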
[ { "msg_contents": "The most recent cfbot run for a patch I am interested in has failed a\nnewly added regression test.\n\nPlease see http://cfbot.cputube.org/ for 36/2906\n\n~\n\nThe failure logs [2] are very curious because the error message is\nwhat was expected but it has a different position of the ellipsis\n(...).\n\nBut only for Windows.\n\n -- fail - publication WHERE clause must be boolean\n ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (1234);\n ERROR: argument of PUBLICATION WHERE must be type boolean, not type integer\n-LINE 1: ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (...\n+LINE 1: ...PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (1234);\n\n\nHow is that possible?\n\nIs this a problem caused by the patch code? If so, how?\n\nOr is this some obscure boundary case bug of the error ellipsis\ncalculation which I've exposed by accident due to the specific length\nof my bad command?\n\nThanks for any ideas!\n\n------\n[2] https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.157408\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 24 Dec 2021 11:41:47 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Inconsistent ellipsis in regression test error message?" }, { "msg_contents": "On Fri, Dec 24, 2021 at 11:41:47AM +1100, Peter Smith wrote:\n> The most recent cfbot run for a patch I am interested in has failed a\n> newly added regression test.\n> \n> Please see http://cfbot.cputube.org/ for 36/2906\n> \n> The failure logs [2] are very curious because the error message is\n> what was expected but it has a different position of the ellipsis\n> (...).\n> \n> But only for Windows.\n\nI reproduced the diff under linux with:\ntime EXTRA_REGRESS_OPTS=\"--encoding=SQL_ASCII\" make check # --no-locale\n\nThe ellipsis is from reportErrorPosition(). 
I'm not sure I'll look into this\nmore, though.\n\n> -- fail - publication WHERE clause must be boolean\n> ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (1234);\n> ERROR: argument of PUBLICATION WHERE must be type boolean, not type integer\n> -LINE 1: ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (...\n> +LINE 1: ...PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (1234);\n\n> Or is this some obscure boundary case bug of the error ellipsis\n> calculation which I've exposed by accident due to the specific length\n> of my bad command?\n\n\n", "msg_date": "Sun, 26 Dec 2021 09:13:14 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistent ellipsis in regression test error message?" }, { "msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> The most recent cfbot run for a patch I am interested in has failed a\n> newly added regression test.\n> Please see http://cfbot.cputube.org/ for 36/2906\n> The failure logs [2] are very curious because the error message is\n> what was expected but it has a different position of the ellipsis\n\nThat \"expected\" output is clearly completely insane; it's pointing\nthe cursor in the middle of the \"TABLE\" keyword, not at the offending\nconstant. 
I can reproduce that when the database encoding is UTF8,\nbut if it's SQL_ASCII or a single-byte encoding then I get a saner result:\n\nregression=# ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (1234);\nERROR: argument of PUBLICATION WHERE must be type boolean, not type integer\nLINE 1: ...PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (1234);\n ^\n\nThis is not a client-side problem: the error position being reported\nby the server is different, as you can easily see in the server's log:\n\n2021-12-27 12:05:15.395 EST [1510837] ERROR: argument of PUBLICATION WHERE must be type boolean, not type integer at character 33\n2021-12-27 12:05:15.395 EST [1510837] STATEMENT: ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (1234);\n\n(it says \"at character 61\" in the sane case).\n\nI traced this as far as finding that the pstate being passed to\ncoerce_to_boolean has a totally wrong p_sourcetext:\n\n(gdb) p *pstate\n$3 = {parentParseState = 0x0, \n p_sourcetext = 0x1fba9e8 \"{A_CONST :val 1234 :location 60}\", \n p_rtable = 0x2063ce0, p_joinexprs = 0x0, p_joinlist = 0x0, \n p_namespace = 0x2063dc8, p_lateral_active = false, p_ctenamespace = 0x0, \n p_future_ctes = 0x0, p_parent_cte = 0x0, p_target_relation = 0x0, \n p_target_nsitem = 0x0, p_is_insert = false, p_windowdefs = 0x0, \n p_expr_kind = EXPR_KIND_NONE, p_next_resno = 1, p_multiassign_exprs = 0x0, \n p_locking_clause = 0x0, p_locked_from_parent = false, \n p_resolve_unknowns = true, p_queryEnv = 0x0, p_hasAggs = false, \n p_hasWindowFuncs = false, p_hasTargetSRFs = false, p_hasSubLinks = false, \n p_hasModifyingCTE = false, p_last_srf = 0x0, p_pre_columnref_hook = 0x0, \n p_post_columnref_hook = 0x0, p_paramref_hook = 0x0, \n p_coerce_param_hook = 0x0, p_ref_hook_state = 0x0}\n\nIn short, GetTransformedWhereClause is inserting completely faulty data in\np_sourcetext. 
This code needs to be revised to pass down the original\ncommand string, or maybe better pass down the whole ParseState that was\navailable to AlterPublication, instead of inventing a bogus one.\n\nThe reason why the behavior depends on DB encoding is left as an\nexercise for the student.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Dec 2021 12:34:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Inconsistent ellipsis in regression test error message?" }, { "msg_contents": "On Tue, Dec 28, 2021 at 4:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > The most recent cfbot run for a patch I am interested in has failed a\n> > newly added regression test.\n> > Please see http://cfbot.cputube.org/ for 36/2906\n> > The failure logs [2] are very curious because the error message is\n> > what was expected but it has a different position of the ellipsis\n>\n> That \"expected\" output is clearly completely insane; it's pointing\n> the cursor in the middle of the \"TABLE\" keyword, not at the offending\n> constant. 
I can reproduce that when the database encoding is UTF8,\n> but if it's SQL_ASCII or a single-byte encoding then I get a saner result:\n>\n> regression=# ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (1234);\n> ERROR: argument of PUBLICATION WHERE must be type boolean, not type integer\n> LINE 1: ...PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (1234);\n> ^\n>\n> This is not a client-side problem: the error position being reported\n> by the server is different, as you can easily see in the server's log:\n>\n> 2021-12-27 12:05:15.395 EST [1510837] ERROR: argument of PUBLICATION WHERE must be type boolean, not type integer at character 33\n> 2021-12-27 12:05:15.395 EST [1510837] STATEMENT: ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (1234);\n>\n> (it says \"at character 61\" in the sane case).\n>\n> I traced this as far as finding that the pstate being passed to\n> coerce_to_boolean has a totally wrong p_sourcetext:\n>\n> (gdb) p *pstate\n> $3 = {parentParseState = 0x0,\n> p_sourcetext = 0x1fba9e8 \"{A_CONST :val 1234 :location 60}\",\n> p_rtable = 0x2063ce0, p_joinexprs = 0x0, p_joinlist = 0x0,\n> p_namespace = 0x2063dc8, p_lateral_active = false, p_ctenamespace = 0x0,\n> p_future_ctes = 0x0, p_parent_cte = 0x0, p_target_relation = 0x0,\n> p_target_nsitem = 0x0, p_is_insert = false, p_windowdefs = 0x0,\n> p_expr_kind = EXPR_KIND_NONE, p_next_resno = 1, p_multiassign_exprs = 0x0,\n> p_locking_clause = 0x0, p_locked_from_parent = false,\n> p_resolve_unknowns = true, p_queryEnv = 0x0, p_hasAggs = false,\n> p_hasWindowFuncs = false, p_hasTargetSRFs = false, p_hasSubLinks = false,\n> p_hasModifyingCTE = false, p_last_srf = 0x0, p_pre_columnref_hook = 0x0,\n> p_post_columnref_hook = 0x0, p_paramref_hook = 0x0,\n> p_coerce_param_hook = 0x0, p_ref_hook_state = 0x0}\n>\n> In short, GetTransformedWhereClause is inserting completely faulty data in\n> p_sourcetext. 
This code needs to be revised to pass down the original\n> command string, or maybe better pass down the whole ParseState that was\n> available to AlterPublication, instead of inventing a bogus one.\n>\n> The reason why the behavior depends on DB encoding is left as an\n> exercise for the student.\n>\n\nThanks for the information, and sorry for taking up your time tracing\nwhat ended up being our bug after all...\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 4 Jan 2022 17:03:18 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inconsistent ellipsis in regression test error message?" } ]
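The stray ellipsis above came from a server-side error-position bug, but it helps to see in miniature how a client renders the cursor from the reported position. This is a simplified sketch only, not libpq's actual reportErrorPosition(), which also accounts for the display width of multibyte characters (the reason the symptom depended on database encoding):

```python
def show_error_position(query: str, position: int, width: int = 40) -> str:
    """Render an error cursor the way a client might: clamp a window of
    `width` characters around the server's 1-based character position,
    mark truncation with '...', and point a caret at the offending spot."""
    idx = position - 1                      # 0-based offset into the string
    start = max(0, idx - width // 2)
    end = min(len(query), start + width)
    prefix = "..." if start > 0 else ""
    suffix = "..." if end < len(query) else ""
    line = "LINE 1: " + prefix + query[start:end] + suffix
    caret = " " * (len("LINE 1: ") + len(prefix) + (idx - start)) + "^"
    return line + "\n" + caret

print(show_error_position("SELECT 1 + 'x'", 12))
```

Because the position is counted in characters, a bogus ParseState (or a byte-vs-character mix-up) shifts the window and caret exactly as seen in the failing regression output.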
[ { "msg_contents": "From 53.2.9. SSL Session Encryption:\n\n\n> When SSL encryption can be performed, the server is expected to send only\n> the single S byte and then wait for the frontend to initiate an SSL\n> handshake. If additional bytes are available to read at this point, it\n> likely means that a man-in-the-middle is attempting to perform a\n> buffer-stuffing attack (CVE-2021-23222). Frontends should be coded either\n> to read exactly one byte from the socket before turning the socket over to\n> their SSL library, or to treat it as a protocol violation if they find they\n> have read additional bytes.\n\n\n> An initial SSLRequest can also be used in a connection that is being\n> opened to send a CancelRequest message.\n\n\n> While the protocol itself does not provide a way for the server to force\n> SSL encryption, the administrator can configure the server to reject\n> unencrypted sessions as a byproduct of authentication checking.\n\n\nHas consideration been given to having something like ssl-mode=tls-only\nwhere the SSLRequest message is skipped and the TLS handshake starts\nimmediately with the protocol continuing after that?\n\nThis has several advantages:\n1) One less round-trip when establishing the connection, speeding this up.\nTLS 1.3 removed a round-trip and this was significant so it could help.\n2) No possibility of downgrading to non-TLS. (Not sure this is an issue\nthough.)\n3) Connections work through normal TLS proxies and SNI can be used for\nrouting.\n\nThis final advantage is the main reason I'd like to see this implemented.\nPostgres is increasingly being run in multi-tenant Kubernetes clusters\nwhere load-balancers and node ports are not available, cost or don't scale\nand it is currently difficult to connect to databases running there. 
If it\nwas possible to use normal\ningress TLS proxies, running Postgres in\nKubernetes would be much easier and there are other use cases where just\nusing TLS would be beneficial as well.\n\nQuestions about TLS SNI support have been asked for several years now and\nthis would be a big usability improvement. SNI support was recently added\nto the client-side and this seems like the next logical step.\n\nThoughts?\n\n - Keith\n", "msg_date": "Fri, 24 Dec 2021 14:08:05 +0000", "msg_from": "Keith Burdis <keith@burdis.org>", "msg_from_op": true, "msg_subject": "Proposal: sslmode=tls-only" }, { "msg_contents": "\nOn 12/24/21 09:08, Keith Burdis wrote:\n> From 53.2.9. SSL Session Encryption:\n>  \n>\n> When SSL encryption can be performed, the server is expected to\n> send only the single S byte and then wait for the frontend to\n> initiate an SSL handshake. 
If additional bytes are available to\n> read at this point, it likely means that a man-in-the-middle is\n> attempting to perform a buffer-stuffing attack (CVE-2021-23222).\n> Frontends should be coded either to read exactly one byte from the\n> socket before turning the socket over to their SSL library, or to\n> treat it as a protocol violation if they find they have read\n> additional bytes.\n>\n>\n> An initial SSLRequest can also be used in a connection that is\n> being opened to send a CancelRequest message.\n>\n>\n> While the protocol itself does not provide a way for the server to\n> force SSL encryption, the administrator can configure the server\n> to reject unencrypted sessions as a byproduct of authentication\n> checking.\n>\n>\n> Has consideration been given to having something like\n> ssl-mode=tls-only where the SSLRequest message is skipped and the TLS\n> handshake starts immediately with the protocol continuing after that?\n>\n> This has several advantages:\n> 1) One less round-trip when establishing the connection, speeding this\n> up. TLS 1.3 removed a round-trip and this was significant so it could\n> help.\n> 2) No possibility of downgrading to non-TLS. (Not sure this is an\n> issue though.)\n> 3) Connections work through normal TLS proxies and SNI can be used for\n> routing.  \n>\n> This final advantage is the main reason I'd like to see this\n> implemented. Postgres is increasingly being run in multi-tenant\n> Kubernetes clusters where load-balancers and node ports are not\n> available, cost or don't scale and it is currently difficult to\n> connect to databases running there. If it was possible to use normal\n> ingress TLS proxies, running Postgres in Kubernetes would be much\n> easier and there are other use cases where just using TLS would be\n> beneficial as well.\n>\n> Questions about TLS SNI support have been asked for several years now\n> and this would be a big usability improvement. 
SNI support was\n> recently added to the client-side and this seems like the next logical\n> step. \n>\n> Thoughts?\n>\n>\n\nIsn't that going to break every existing client? How is a client\nsupposed to know which protocol to follow?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 24 Dec 2021 14:16:37 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: sslmode=tls-only" }, { "msg_contents": "Servers that use sslmode=tls-only would not be compatible with clients\nthat do not yet support it, but that is the same for any similar server-side\nchange, for example if the server requires a minimum of TLS 1.3 but the\nclient only supports TLS 1.2.\n\nIIUC with the default sslmode=prefer a client currently first tries to\nconnect using SSL by sending an SSLRequest and if the response is not an S\nthen it tries with a plain StartupMessage. If we wanted this new mode to\nwork by default as well, this could be changed to try a TLS handshake after\nthe SSLRequest and StartupMessage. Similarly for the other existing\nsslmode values an SSLRequest could be sent and if that fails then a TLS\nhandshake could be initiated.\n\nBut this is \"only provided as the default for backwards compatibility, and\nis not recommended in secure deployments\". I'd expect that most secure\ndeployments set an explicit sslmode=verify-full, so having them set an\nexplicit sslmode=tls-only would not be an issue once the client and server\nsupport it.\n\nThere is already precedent for having clients handle something new [1]:\"The\nfrontend should also be prepared to handle an ErrorMessage response to\nSSLRequest from the server. This would only occur if the server predates\nthe addition of SSL support to PostgreSQL. (Such servers are now very\nancient, and likely do not exist in the wild anymore.) 
In this case the\nconnection must be closed, but the frontend might choose to open a fresh\nconnection and proceed without requesting SSL.\"\n\n[1] https://www.postgresql.org/docs/14/protocol-flow.html#id-1.10.5.7.11\n\nOn Fri, 24 Dec 2021 at 19:16, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 12/24/21 09:08, Keith Burdis wrote:\n> > From 53.2.9. SSL Session Encryption:\n> >\n> >\n> > When SSL encryption can be performed, the server is expected to\n> > send only the single S byte and then wait for the frontend to\n> > initiate an SSL handshake. If additional bytes are available to\n> > read at this point, it likely means that a man-in-the-middle is\n> > attempting to perform a buffer-stuffing attack (CVE-2021-23222).\n> > Frontends should be coded either to read exactly one byte from the\n> > socket before turning the socket over to their SSL library, or to\n> > treat it as a protocol violation if they find they have read\n> > additional bytes.\n> >\n> >\n> > An initial SSLRequest can also be used in a connection that is\n> > being opened to send a CancelRequest message.\n> >\n> >\n> > While the protocol itself does not provide a way for the server to\n> > force SSL encryption, the administrator can configure the server\n> > to reject unencrypted sessions as a byproduct of authentication\n> > checking.\n> >\n> >\n> > Has consideration been given to having something like\n> > ssl-mode=tls-only where the SSLRequest message is skipped and the TLS\n> > handshake starts immediately with the protocol continuing after that?\n> >\n> > This has several advantages:\n> > 1) One less round-trip when establishing the connection, speeding this\n> > up. TLS 1.3 removed a round-trip and this was significant so it could\n> > help.\n> > 2) No possibility of downgrading to non-TLS. 
(Not sure this is an\n> > issue though.)\n> > 3) Connections work through normal TLS proxies and SNI can be used for\n> > routing.\n> >\n> > This final advantage is the main reason I'd like to see this\n> > implemented. Postgres is increasingly being run in multi-tenant\n> > Kubernetes clusters where load-balancers and node ports are not\n> > available, cost or don't scale and it is currently difficult to\n> > connect to databases running there. If it was possible to use normal\n> > ingress TLS proxies, running Postgres in Kubernetes would be much\n> > easier and there are other use cases where just using TLS would be\n> > beneficial as well.\n> >\n> > Questions about TLS SNI support have been asked for several years now\n> > and this would be a big usability improvement. SNI support was\n> > recently added to the client-side and this seems like the next logical\n> > step.\n> >\n> > Thoughts?\n> >\n> >\n>\n> Isn't that going to break every existing client? How is a client\n> supposed to know which protocol to follow?\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n", "msg_date": "Fri, 24 Dec 2021 20:25:54 +0000", "msg_from": "Keith Burdis <keith@burdis.org>", "msg_from_op": true, "msg_subject": "Re: Proposal: sslmode=tls-only" }, { "msg_contents": "Keith Burdis <keith@burdis.org> writes:\n> Has consideration been given to having something like ssl-mode=tls-only\n> where the SSLRequest message is skipped and the TLS handshake starts\n> immediately with the protocol continuing after that?\n\nhttps://www.postgresql.org/message-id/flat/fcc3ebeb7f05775b63f3207ed52a54ea5d17fb42.camel%40vmware.com\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Dec 2021 17:05:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: sslmode=tls-only" }, { "msg_contents": "On Fri, 2021-12-24 at 14:08 +0000, Keith Burdis wrote:\r\n> Has consideration been given to having something like ssl-mode=tls-\r\n> only where the SSLRequest message is skipped and the TLS handshake\r\n> starts immediately with the protocol continuing after that?\r\n\r\nFrom an implementation standpoint, I think I'd prefer to keep sslmode\r\nindependent from the new implicit-TLS setting, so that any existing\r\ndeployments can migrate to the new handshake without needing to change\r\ntheir certificate setup. (That said, any sslmodes weaker than `require`\r\nwould be incompatible with the new setting.)\r\n\r\n--Jacob\r\n", "msg_date": "Mon, 3 Jan 2022 17:24:19 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: sslmode=tls-only" } ]
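For concreteness, the round trip this proposal would skip consists of the eight-byte SSLRequest preamble and the server's single-byte answer. A small sketch of those documented wire values (the function here is illustrative, not part of any driver):

```python
import struct

# The SSLRequest packet the thread proposes skipping: an Int32 length (8)
# followed by the magic request code 80877103, i.e. 1234 << 16 | 5679.
SSL_REQUEST = struct.pack("!ii", 8, 80877103)

def negotiate(server_reply: bytes) -> str:
    """Classify the server's one-byte answer to an SSLRequest.
    Under the proposed implicit-TLS mode there would be no such byte:
    the TLS ClientHello would be the first thing on the wire, which is
    what lets ordinary SNI-routing proxies handle the connection."""
    if server_reply == b"S":
        return "start TLS handshake"
    if server_reply == b"N":
        return "server refuses SSL"
    raise ValueError("protocol violation")

print(SSL_REQUEST.hex())  # 0000000804d2162f
```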
[ { "msg_contents": "Hey!\n\nPostgres 14 introduces an option to create a GiST index using a sort\nmethod. It allows indexes to be created much faster, but as was\nmentioned in the sort support patch discussion, the faster build performance\ncomes at the cost of a higher degree of overlap between pages than for indexes\nbuilt with the regular method.\n\n\nSort support was implemented for the GiST opclass in PostGIS but eventually got\nremoved as the default behaviour in the latest 3.2 release because, as Paul Ramsey discovered\n(https://lists.osgeo.org/pipermail/postgis-devel/2021-November/029225.html),\nquery performance might degrade by 50%.\n\nTogether with Darafei Praliaskouski and Andrey Borodin, we tried several\napproaches to solve the query performance degradation:\n\n - The first attempt was to decide whether to make a split depending\n on the direction of the curve (Z-curve for the Postgres geometry type, Hilbert curve\n for PostGIS). It was implemented by filling a page until fillfactor / 2 and\n then checking the penalty for every next item, continuing to insert into the current\n page if the penalty is 0 or starting a new page if the penalty is not 0. It turned out\n that with this approach the index becomes significantly larger whereas page\n overlap still remains high.\n - Andrey Borodin implemented LRU + split: a fixed number of pages is\n kept in memory and the best candidate page to insert the next item in is\n selected by minimum penalty among these pages. 
If the best page for\n insertion is full, it gets split into multiple pages, and if the number\n of candidate pages after the split exceeds the limit, the pages into which\n insertion has not happened recently are flushed.\n https://github.com/x4m/postgres_g/commit/0f2ed5f32a00f6c3019048e0c145b7ebda629e73.\n We ran some tests, and while query performance using an index built with\n this approach is fine, the size of the index is extremely large.\n\nEventually we implemented an idea outlined in the sort support patch\ndiscussion here\nhttps://www.postgresql.org/message-id/flat/08173bd0-488d-da76-a904-912c35da446b@iki.fi#09ac9751a4cde897c99b99b2170faf3a\nthat several pages can be collected and then divided into actual index\npages by calling picksplit. My benchmarks with the data provided in\npostgis-devel show that query performance using an index built with patched\nsort support is comparable with the performance of a query using an index built with\nthe regular method. The size of the index also matches the size of an index built with\nthe non-sorting method.\n\n\nIt should be noted that with the current implementation of the sorting\nbuild method, pages are always filled up to fillfactor. This patch changes\nthis behavior to what it would be if using a non-sorting method, and pages\nare not always filled to fillfactor for the sake of query performance. 
I'm\ninterested in improving it and I wonder if there are any ideas on this.\n\n\nBenchmark summary:\n\n\ncreate index roads_rdr_idx on roads_rdr using gist (geom);\n\n\nwith sort support before patch / CREATE INDEX 76.709 ms\n\nwith sort support after patch / CREATE INDEX 225.238 ms\n\nwithout sort support / CREATE INDEX 446.071 ms\n\n\nselect count(*) from roads_rdr a, roads_rdr b where a.geom && b.geom;\n\n\nwith sort support before patch / SELECT 5766.526 ms\n\nwith sort support after patch / SELECT 2646.554 ms\n\nwithout sort support / SELECT 2721.718 ms\n\n\nindex size\n\n\nwith sort support before patch / IDXSIZE 2940928 bytes\n\nwith sort support after patch / IDXSIZE 4956160 bytes\n\nwithout sort support / IDXSIZE 5447680 bytes\n\nMore detailed:\n\nBefore patch using sorted method:\n\n\npostgres=# create index roads_rdr_geom_idx_sortsupport on roads_rdr using\ngist(geom);\n\nCREATE INDEX\n\nTime: 76.709 ms\n\n\npostgres=# select count(*) from roads_rdr a, roads_rdr b where a.geom &&\nb.geom;\n\n count\n\n--------\n\n 505806\n\n(1 row)\n\n\nTime: 5766.526 ms (00:05.767)\n\npostgres=# select count(*) from roads_rdr a, roads_rdr b where a.geom &&\nb.geom;\n\n count\n\n--------\n\n 505806\n\n(1 row)\n\n\nTime: 5880.142 ms (00:05.880)\n\npostgres=# select count(*) from roads_rdr a, roads_rdr b where a.geom &&\nb.geom;\n\n count\n\n--------\n\n 505806\n\n(1 row)\n\n\nTime: 5778.437 ms (00:05.778)\n\n\npostgres=# select gist_stat('roads_rdr_geom_idx_sortsupport');\n\n gist_stat\n\n------------------------------------------\n\n Number of levels: 3 +\n\n Number of pages: 359 +\n\n Number of leaf pages: 356 +\n\n Number of tuples: 93034 +\n\n Number of invalid tuples: 0 +\n\n Number of leaf tuples: 92676 +\n\n Total size of tuples: 2609260 bytes+\n\n Total size of leaf tuples: 2599200 bytes+\n\n Total size of index: 2940928 bytes+\n\n\n\n(1 row)\n\nAfter patch using sorted method:\n\npostgres=# create index roads_rdr_geom_idx_sortsupport on roads_rdr 
using\ngist(geom);\n\nCREATE INDEX\n\nTime: 225.238 ms\n\n\npostgres=# select count(*) from roads_rdr a, roads_rdr b where a.geom &&\nb.geom;\n\n count\n\n--------\n\n 505806\n\n(1 row)\n\n\nTime: 2646.554 ms (00:02.647)\n\npostgres=# select count(*) from roads_rdr a, roads_rdr b where a.geom &&\nb.geom;\n\n count\n\n--------\n\n 505806\n\n(1 row)\n\n\nTime: 2499.107 ms (00:02.499)\n\npostgres=# select count(*) from roads_rdr a, roads_rdr b where a.geom &&\nb.geom;\n\n count\n\n--------\n\n 505806\n\n(1 row)\n\n\nTime: 2519.815 ms (00:02.520)\n\n\npostgres=# select gist_stat('roads_rdr_geom_idx_sortsupport');\n\n gist_stat\n\n------------------------------------------\n\n Number of levels: 3 +\n\n Number of pages: 605 +\n\n Number of leaf pages: 600 +\n\n Number of tuples: 93280 +\n\n Number of invalid tuples: 0 +\n\n Number of leaf tuples: 92676 +\n\n Total size of tuples: 2619100 bytes+\n\n Total size of leaf tuples: 2602128 bytes+\n\n Total size of index: 4956160 bytes+\n\n\n\n(1 row)\n\nWith index built using default method:\n\npostgres=# create index roads_rdr_geom_idx_no_sortsupport on roads_rdr\nusing gist(geom);\n\nCREATE INDEX\n\nTime: 446.071 ms\n\npostgres=# select count(*) from roads_rdr a, roads_rdr b where a.geom &&\nb.geom;\n\n count\n\n--------\n\n 505806\n\n(1 row)\n\n\nTime: 2721.718 ms (00:02.722)\n\npostgres=# select count(*) from roads_rdr a, roads_rdr b where a.geom &&\nb.geom;\n\n count\n\n--------\n\n 505806\n\n(1 row)\n\n\nTime: 3549.549 ms (00:03.550)\n\npostgres=# select count(*) from roads_rdr a, roads_rdr b where a.geom &&\nb.geom;\n\n count\n\n--------\n\n 505806\n\n(1 row)\n\n\npostgres=# select gist_stat('roads_rdr_geom_idx_no_sortsupport');\n\n gist_stat\n\n------------------------------------------\n\n Number of levels: 3 +\n\n Number of pages: 665 +\n\n Number of leaf pages: 660 +\n\n Number of tuples: 93340 +\n\n Number of invalid tuples: 0 +\n\n Number of leaf tuples: 92676 +\n\n Total size of tuples: 2621500 bytes+\n\n Total 
size of leaf tuples: 2602848 bytes+\n\n Total size of index: 5447680 bytes+\n\n\n(1 row)", "msg_date": "Fri, 24 Dec 2021 23:19:36 +0300", "msg_from": "Aliaksandr Kalenik <akalenik@kontur.io>", "msg_from_op": true, "msg_subject": "[PATCH] reduce page overlap of GiST indexes built using sorted method" }, { "msg_contents": "\nHi Aliaksandr!\n\nThanks for working on this!\n\n> Benchmark summary:\n> \n> create index roads_rdr_idx on roads_rdr using gist (geom);\n> \n> with sort support before patch / CREATE INDEX 76.709 ms\n> \n> with sort support after patch / CREATE INDEX 225.238 ms\n> \n> without sort support / CREATE INDEX 446.071 ms\n> \n> select count(*) from roads_rdr a, roads_rdr b where a.geom && b.geom;\n> \n> with sort support before patch / SELECT 5766.526 ms\n> \n> with sort support after patch / SELECT 2646.554 ms\n> \n> without sort support / SELECT 2721.718 ms\n> \n> index size\n> \n> with sort support before patch / IDXSIZE 2940928 bytes\n> \n> with sort support after patch / IDXSIZE 4956160 bytes\n> \n> without sort support / IDXSIZE 5447680 bytes\n\nThe numbers are impressive, newly build index is actually performing better!\nI've conducted some tests over points, not PostGIS geometry. For points build is 2x slower now, but this is the cost of faster scans.\n\nSome strong points of this index building technology.\nThe proposed algorithm is not randomly chosen as anything that performs better than the original sorting build. We actually researched every idea we knew from literature and intuition. Although we consciously limited the search area by existing GiST API.\nStuff like penalty-based choose-subtree and split in equal halves seriously limit possible solutions. If anyone knows an any better way to build GiST faster or with better scan performance - please let us know.\nThe proposed algorithm contains the current algorithm as a special case: there is a parameter - the number of buffers accumulated before calling Split. 
If this parameter is 1 proposed algorithm will produce exactly the same output.\n\nAt this stage, we would like to hear some feedback from Postgres and PostGIS community. What other performance aspects should we test?\n\nCurrent patch implementation has some known deficiencies:\n1. We need a GUC instead of the hard-coded buffer of 8 pages.\n2. Is GiST sorting build still deterministic? If not - we should add a fixed random seed into pageinspect tests.\n3. It would make sense to check the resulting indexes with amcheck [0], although it's not committed.\n4. We cannot make an exact fillfactor due to the behavior of picksplit. But can we improve anything here? I think if not - it's still OK.\n5. GistSortedBuildPageState is no more page state. It's Level state or something like that.\n6. The patch desperately needs comments.\n\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/flat/59D0DA6B-1652-4D44-B0EF-A582D5824F83%40yandex-team.ru\n\n\n", "msg_date": "Sat, 25 Dec 2021 18:35:30 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] reduce page overlap of GiST indexes built using sorted\n method" }, { "msg_contents": "After further testing, here is v2 where the issue that rightlink can be set\nwhen an index page is already flushed is fixed.\n\nOn Sat, Dec 25, 2021 at 4:35 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n\n>\n> Hi Aliaksandr!\n>\n> Thanks for working on this!\n>\n> > Benchmark summary:\n> >\n> > create index roads_rdr_idx on roads_rdr using gist (geom);\n> >\n> > with sort support before patch / CREATE INDEX 76.709 ms\n> >\n> > with sort support after patch / CREATE INDEX 225.238 ms\n> >\n> > without sort support / CREATE INDEX 446.071 ms\n> >\n> > select count(*) from roads_rdr a, roads_rdr b where a.geom && b.geom;\n> >\n> > with sort support before patch / SELECT 5766.526 ms\n> >\n> > with sort support after patch / SELECT 2646.554 ms\n> >\n> > without sort 
support / SELECT 2721.718 ms\n> >\n> > index size\n> >\n> > with sort support before patch / IDXSIZE 2940928 bytes\n> >\n> > with sort support after patch / IDXSIZE 4956160 bytes\n> >\n> > without sort support / IDXSIZE 5447680 bytes\n>\n> The numbers are impressive, newly build index is actually performing\n> better!\n> I've conducted some tests over points, not PostGIS geometry. For points\n> build is 2x slower now, but this is the cost of faster scans.\n>\n> Some strong points of this index building technology.\n> The proposed algorithm is not randomly chosen as anything that performs\n> better than the original sorting build. We actually researched every idea\n> we knew from literature and intuition. Although we consciously limited the\n> search area by existing GiST API.\n> Stuff like penalty-based choose-subtree and split in equal halves\n> seriously limit possible solutions. If anyone knows an any better way to\n> build GiST faster or with better scan performance - please let us know.\n> The proposed algorithm contains the current algorithm as a special case:\n> there is a parameter - the number of buffers accumulated before calling\n> Split. If this parameter is 1 proposed algorithm will produce exactly the\n> same output.\n>\n> At this stage, we would like to hear some feedback from Postgres and\n> PostGIS community. What other performance aspects should we test?\n>\n> Current patch implementation has some known deficiencies:\n> 1. We need a GUC instead of the hard-coded buffer of 8 pages.\n> 2. Is GiST sorting build still deterministic? If not - we should add a\n> fixed random seed into pageinspect tests.\n> 3. It would make sense to check the resulting indexes with amcheck [0],\n> although it's not committed.\n> 4. We cannot make an exact fillfactor due to the behavior of picksplit.\n> But can we improve anything here? I think if not - it's still OK.\n> 5. GistSortedBuildPageState is no more page state. It's Level state or\n> something like that.\n> 6. 
The patch desperately needs comments.\n>\n>\n> Thanks!\n>\n> Best regards, Andrey Borodin.\n>\n> [0]\n> https://www.postgresql.org/message-id/flat/59D0DA6B-1652-4D44-B0EF-A582D5824F83%40yandex-team.ru\n>", "msg_date": "Sat, 8 Jan 2022 22:20:45 +0300", "msg_from": "Aliaksandr Kalenik <akalenik@kontur.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] reduce page overlap of GiST indexes built using sorted\n method" }, { "msg_contents": "Hi,\n\nhere are some benchmark results for GiST patch: \nhttps://gist.github.com/mngr777/88ae200c9c30ba5656583d92e8d2cf9e\n\nCode used for benchmarking: https://github.com/mngr777/pg_index_bm2,\nsee README for the list of test queries.\n\nResults show query performance close to indexes built with no \nsortsupport function, with index creation time reduced approx. by half.\n\nOn 1/8/22 10:20 PM, Aliaksandr Kalenik wrote:\n> After further testing, here is v2 where the issue that rightlink can be \n> set when an index page is already flushed is fixed.\n> \n> On Sat, Dec 25, 2021 at 4:35 PM Andrey Borodin <x4mmm@yandex-team.ru \n> <mailto:x4mmm@yandex-team.ru>> wrote:\n> \n> \n> Hi Aliaksandr!\n> \n> Thanks for working on this!\n> \n> > Benchmark summary:\n> >\n> > create index roads_rdr_idx on roads_rdr using gist (geom);\n> >\n> > with sort support before patch / CREATE INDEX 76.709 ms\n> >\n> > with sort support after patch / CREATE INDEX 225.238 ms\n> >\n> > without sort support / CREATE INDEX 446.071 ms\n> >\n> > select count(*) from roads_rdr a, roads_rdr b where a.geom && b.geom;\n> >\n> > with sort support before patch / SELECT 5766.526 ms\n> >\n> > with sort support after patch / SELECT 2646.554 ms\n> >\n> > without sort support / SELECT 2721.718 ms\n> >\n> > index size\n> >\n> > with sort support before patch / IDXSIZE 2940928 bytes\n> >\n> > with sort support after patch / IDXSIZE 4956160 bytes\n> >\n> > without sort support / IDXSIZE 5447680 bytes\n> \n> The numbers are impressive, newly build index is actually 
performing\n> better!\n> I've conducted some tests over points, not PostGIS geometry. For\n> points build is 2x slower now, but this is the cost of faster scans.\n> \n> Some strong points of this index building technology.\n> The proposed algorithm is not randomly chosen as anything that\n> performs better than the original sorting build. We actually\n> researched every idea we knew from literature and intuition.\n> Although we consciously limited the search area by existing GiST API.\n> Stuff like penalty-based choose-subtree and split in equal halves\n> seriously limit possible solutions. If anyone knows an any better\n> way to build GiST faster or with better scan performance - please\n> let us know.\n> The proposed algorithm contains the current algorithm as a special\n> case: there is a parameter - the number of buffers accumulated\n> before calling Split. If this parameter is 1 proposed algorithm will\n> produce exactly the same output.\n> \n> At this stage, we would like to hear some feedback from Postgres and\n> PostGIS community. What other performance aspects should we test?\n> \n> Current patch implementation has some known deficiencies:\n> 1. We need a GUC instead of the hard-coded buffer of 8 pages.\n> 2. Is GiST sorting build still deterministic? If not - we should add\n> a fixed random seed into pageinspect tests.\n> 3. It would make sense to check the resulting indexes with amcheck\n> [0], although it's not committed.\n> 4. We cannot make an exact fillfactor due to the behavior of\n> picksplit. But can we improve anything here? I think if not - it's\n> still OK.\n> 5. GistSortedBuildPageState is no more page state. It's Level state\n> or something like that.\n> 6. 
The patch desperately needs comments.\n> \n> \n> Thanks!\n> \n> Best regards, Andrey Borodin.\n> \n> [0]\n> https://www.postgresql.org/message-id/flat/59D0DA6B-1652-4D44-B0EF-A582D5824F83%40yandex-team.ru\n> <https://www.postgresql.org/message-id/flat/59D0DA6B-1652-4D44-B0EF-A582D5824F83%40yandex-team.ru>\n> \n\n\n", "msg_date": "Sun, 9 Jan 2022 14:28:20 +0300", "msg_from": "\"sergei sh.\" <sshoulbakov@kontur.io>", "msg_from_op": false, "msg_subject": "Re: [PATCH] reduce page overlap of GiST indexes built using sorted\n method" }, { "msg_contents": "Hi Aliaksandr,\n\nNice work on this. I've been following it a bit since the regression when\nit was noted and it sparked renewed interest in R-tree structure and\noptimization for me.\n\nAs for ideas. I'm not deep into details of postgresql and gist, but I've\nlearned that the node size for gist indexes are equal to the page size, and\nas such they are quite large in this context. This is what caused the\noverly flat tree when increasing the fill factor, if I'm not mistaken, and\nwhy it helps to decrease the fill factor.\n\nI think there is a large potential if tree node size could be tweaked to\nfit the use case and combine higher fill factor with optimal tree depth.\nIt's data dependent so it would even make sense to make it an index\nparameter, if possible.\n\nThere might be some deep reason in the architecture that I'm unaware of\nthat could make it difficult to affect the node size but regardless, I\nbelieve there could be a substantial win if node size could be controlled.\n\nRegards,\n\nBjörn\n\nDen mån 17 jan. 2022 kl 23:46 skrev Aliaksandr Kalenik <akalenik@kontur.io>:\n\n> Hey!\n>\n> Postgres 14 introduces an option to create a GiST index using a sort\n> method. 
It allows to create indexes much faster but as it had been\n> mentioned in sort support patch discussion the faster build performance\n> comes at cost of higher degree of overlap between pages than for indexes\n> built with regular method.\n>\n>\n> Sort support was implemented for GiST opclass in PostGIS but eventually\n> got removed as default behaviour in latest 3.2 release because as it had\n> been discovered by Paul Ramsey\n> https://lists.osgeo.org/pipermail/postgis-devel/2021-November/029225.html\n> performance of queries might degrade by 50%.\n>\n> Together with Darafei Praliaskouski, Andrey Borodin and me we tried\n> several approaches to solve query performance degrade:\n>\n> - The first attempt was try to decide whether to make a split\n> depending on direction of curve (Z-curve for Postgres geometry type,\n> Hilbert curve for PostGIS). It was implemented by filling page until\n> fillfactor / 2 and then checking penalty for every next item and keep\n> inserting in current page if penalty is 0 or start new page if penalty is\n> not 0. It turned out that with this approach index becomes significantly\n> larger whereas pages overlap still remains high.\n> - Andrey Borodin implemented LRU + split: a fixed amount of pages are\n> kept in memory and the best candidate page to insert the next item in is\n> selected by minimum penalty among these pages. 
If the best page for\n> insertion is full, it gets splited into multiple pages, and if the amount\n> of candidate pages after split exceeds the limit, the pages insertion to\n> which has not happened recently are flushed.\n> https://github.com/x4m/postgres_g/commit/0f2ed5f32a00f6c3019048e0c145b7ebda629e73.\n> We made some tests and while query performance speed using index built with\n> this approach is fine a size of index is extremely large.\n>\n> Eventually we made implementation of an idea outlined in sort support\n> patch discussion here\n> https://www.postgresql.org/message-id/flat/08173bd0-488d-da76-a904-912c35da446b@iki.fi#09ac9751a4cde897c99b99b2170faf3a\n> that several pages can be collected and then divided into actual index\n> pages by calling picksplit. My benchmarks with data provided in\n> postgis-devel show that query performance using index built with patched\n> sort support is comparable with performance of query using index built with\n> regular method. The size of index is also matches size of index built with\n> non-sorting method.\n>\n>\n> It should be noted that with the current implementation of the sorting\n> build method, pages are always filled up to fillfactor. This patch changes\n> this behavior to what it would be if using a non-sorting method, and pages\n> are not always filled to fillfactor for the sake of query performance. 
I'm\n> interested in improving it and I wonder if there are any ideas on this.\n>\n>\n> Benchmark summary:\n>\n>\n> create index roads_rdr_idx on roads_rdr using gist (geom);\n>\n>\n> with sort support before patch / CREATE INDEX 76.709 ms\n>\n> with sort support after patch / CREATE INDEX 225.238 ms\n>\n> without sort support / CREATE INDEX 446.071 ms\n>\n>\n> select count(*) from roads_rdr a, roads_rdr b where a.geom && b.geom;\n>\n>\n> with sort support before patch / SELECT 5766.526 ms\n>\n> with sort support after patch / SELECT 2646.554 ms\n>\n> without sort support / SELECT 2721.718 ms\n>\n>\n> index size\n>\n>\n> with sort support before patch / IDXSIZE 2940928 bytes\n>\n> with sort support after patch / IDXSIZE 4956160 bytes\n>\n> without sort support / IDXSIZE 5447680 bytes\n>\n> More detailed:\n>\n> Before patch using sorted method:\n>\n>\n> postgres=# create index roads_rdr_geom_idx_sortsupport on roads_rdr using\n> gist(geom);\n>\n> CREATE INDEX\n>\n> Time: 76.709 ms\n>\n>\n> postgres=# select count(*) from roads_rdr a, roads_rdr b where a.geom &&\n> b.geom;\n>\n> count\n>\n> --------\n>\n> 505806\n>\n> (1 row)\n>\n>\n> Time: 5766.526 ms (00:05.767)\n>\n> postgres=# select count(*) from roads_rdr a, roads_rdr b where a.geom &&\n> b.geom;\n>\n> count\n>\n> --------\n>\n> 505806\n>\n> (1 row)\n>\n>\n> Time: 5880.142 ms (00:05.880)\n>\n> postgres=# select count(*) from roads_rdr a, roads_rdr b where a.geom &&\n> b.geom;\n>\n> count\n>\n> --------\n>\n> 505806\n>\n> (1 row)\n>\n>\n> Time: 5778.437 ms (00:05.778)\n>\n>\n> postgres=# select gist_stat('roads_rdr_geom_idx_sortsupport');\n>\n> gist_stat\n>\n> ------------------------------------------\n>\n> Number of levels: 3 +\n>\n> Number of pages: 359 +\n>\n> Number of leaf pages: 356 +\n>\n> Number of tuples: 93034 +\n>\n> Number of invalid tuples: 0 +\n>\n> Number of leaf tuples: 92676 +\n>\n> Total size of tuples: 2609260 bytes+\n>\n> Total size of leaf tuples: 2599200 bytes+\n>\n> Total size of index: 
2940928 bytes+\n>\n>\n>\n> (1 row)\n>\n> After patch using sorted method:\n>\n> postgres=# create index roads_rdr_geom_idx_sortsupport on roads_rdr using\n> gist(geom);\n>\n> CREATE INDEX\n>\n> Time: 225.238 ms\n>\n>\n> postgres=# select count(*) from roads_rdr a, roads_rdr b where a.geom &&\n> b.geom;\n>\n> count\n>\n> --------\n>\n> 505806\n>\n> (1 row)\n>\n>\n> Time: 2646.554 ms (00:02.647)\n>\n> postgres=# select count(*) from roads_rdr a, roads_rdr b where a.geom &&\n> b.geom;\n>\n> count\n>\n> --------\n>\n> 505806\n>\n> (1 row)\n>\n>\n> Time: 2499.107 ms (00:02.499)\n>\n> postgres=# select count(*) from roads_rdr a, roads_rdr b where a.geom &&\n> b.geom;\n>\n> count\n>\n> --------\n>\n> 505806\n>\n> (1 row)\n>\n>\n> Time: 2519.815 ms (00:02.520)\n>\n>\n> postgres=# select gist_stat('roads_rdr_geom_idx_sortsupport');\n>\n> gist_stat\n>\n> ------------------------------------------\n>\n> Number of levels: 3 +\n>\n> Number of pages: 605 +\n>\n> Number of leaf pages: 600 +\n>\n> Number of tuples: 93280 +\n>\n> Number of invalid tuples: 0 +\n>\n> Number of leaf tuples: 92676 +\n>\n> Total size of tuples: 2619100 bytes+\n>\n> Total size of leaf tuples: 2602128 bytes+\n>\n> Total size of index: 4956160 bytes+\n>\n>\n>\n> (1 row)\n>\n> With index built using default method:\n>\n> postgres=# create index roads_rdr_geom_idx_no_sortsupport on roads_rdr\n> using gist(geom);\n>\n> CREATE INDEX\n>\n> Time: 446.071 ms\n>\n> postgres=# select count(*) from roads_rdr a, roads_rdr b where a.geom &&\n> b.geom;\n>\n> count\n>\n> --------\n>\n> 505806\n>\n> (1 row)\n>\n>\n> Time: 2721.718 ms (00:02.722)\n>\n> postgres=# select count(*) from roads_rdr a, roads_rdr b where a.geom &&\n> b.geom;\n>\n> count\n>\n> --------\n>\n> 505806\n>\n> (1 row)\n>\n>\n> Time: 3549.549 ms (00:03.550)\n>\n> postgres=# select count(*) from roads_rdr a, roads_rdr b where a.geom &&\n> b.geom;\n>\n> count\n>\n> --------\n>\n> 505806\n>\n> (1 row)\n>\n>\n> postgres=# select 
gist_stat('roads_rdr_geom_idx_no_sortsupport');\n>\n> gist_stat\n>\n> ------------------------------------------\n>\n> Number of levels: 3 +\n>\n> Number of pages: 665 +\n>\n> Number of leaf pages: 660 +\n>\n> Number of tuples: 93340 +\n>\n> Number of invalid tuples: 0 +\n>\n> Number of leaf tuples: 92676 +\n>\n> Total size of tuples: 2621500 bytes+\n>\n> Total size of leaf tuples: 2602848 bytes+\n>\n> Total size of index: 5447680 bytes+\n>\n>\n> (1 row)\n>\n>\n>\n", "msg_date": "Mon, 17 Jan 2022 23:54:08 +0100", "msg_from": "=?UTF-8?Q?Bj=C3=B6rn_Harrtell?= <bjorn.harrtell@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] reduce page overlap of GiST indexes built using sorted\n method" }, { "msg_contents": "\n\n> 18 янв. 2022 г., в 03:54, Björn Harrtell <bjorn.harrtell@gmail.com> написал(а):\n> \n> There might be some deep reason in the architecture that I'm unaware of that could make it difficult to affect the node size but regardless, I believe there could be a substantial win if node size could be controlled.\n\nThat's kind of orthogonal development path. Some years ago I had posted \"GiST intrapage indexing\" patch [0], that was aiming to make a tree with fanout that is Sqrt(Items on page). But for now decreasing fillfactor == wasting a lot of space, both in shared_buffers and on disk...\n\nThank you for raising this topic, I think I should rebase and refresh that patch too...\n\nBest regards, Andrey Borodin.\n\n\n[0] https://www.postgresql.org/message-id/flat/7780A07B-4D04-41E2-B228-166B41D07EEE%40yandex-team.ru\n\n", "msg_date": "Tue, 18 Jan 2022 23:49:37 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] reduce page overlap of GiST indexes built using sorted\n method" }, { "msg_contents": "Hi,\n\nI've addressed Andrey Borodin's concerns about v2 of this patch by \nAliaksandr\nKalenik in attached version. 
Change list:\n* Number of pages to collect moved to GUC parameter \n\"gist_sorted_build_page_buffer_size\".\n* GistSortedBuildPageState type renamed to GistSortedBuildLevelState.\n* Comments added.\n\nSorted build remains deterministic as long as picksplit implementation \nfor given\nopclass is, which seems to be true for builtin types, so setting random \nseed is\nnot required for testing.\n\nAndrey Borodin's GiST support patch for amcheck was used to verify built \nindexes:\nhttps://commitfest.postgresql.org/25/1800/\nPSA modified version working with current Postgres code (btree functions\nremoved).", "msg_date": "Tue, 18 Jan 2022 23:26:05 +0300", "msg_from": "\"sergei sh.\" <sshoulbakov@kontur.io>", "msg_from_op": false, "msg_subject": "Re: [PATCH] reduce page overlap of GiST indexes built using sorted\n method" }, { "msg_contents": "Hello hackers,\n\nOn Tue, Jan 18, 2022 at 11:26 PM sergei sh. <sshoulbakov@kontur.io> wrote:\n\n> Hi,\n>\n> I've addressed Andrey Borodin's concerns about v2 of this patch by\n> Aliaksandr\n> Kalenik in attached version.\n>\n\n[snip]\n\nThis patchset got some attention in the PostGIS development channel, as it\nis important to really enable the fast GiST build there for the end user.\nThe reviews are positive, it saves build time and performs as well as\noriginal non-sorting build on tested workloads.\n\nTest by Giuseppe Broccolo:\nhttps://lists.osgeo.org/pipermail/postgis-devel/2022-January/029330.html\n\nTest by Regina Obe:\nhttps://lists.osgeo.org/pipermail/postgis-devel/2022-January/029335.html\n\nI believe this patch is an important addition to Postgres 15 and will make\na lot of GIS users happier.\n\nThanks,\nDarafei.", "msg_date": "Tue, 18 Jan 2022 23:48:19 +0300", "msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] reduce page overlap of GiST indexes built using sorted\n method" }, { "msg_contents": "\n\n> 19 янв. 2022 г., в 01:26, sergei sh. <sshoulbakov@kontur.io> написал(а):\n> \n> Hi,\n> \n> I've addressed Andrey Borodin's concerns about v2 of this patch by Aliaksandr\n> Kalenik in attached version. \n\nThank you! I'll make a new iteration of review. 
From a first glance everything looks good, but gist_sorted_build_page_buffer_size haven't any documentation....\n\nI've made one more iteration. The code generally looks OK to me.\n\nSome nitpicking:\n1. gist_sorted_build_page_buffer_size is not documented yet\n2. Comments correctly state that check for interrupts is done once per whatever. Let's make \"whatever\" == \"1 page flush\" again.\n3. There is \"Size i\" in a loop. I haven't found usage of Size, but many size_t-s. For the same purpose in the same file mostly \"int i\" is used.\n4. Many multiline comments are formatted in an unusual manner.\n\nBesides this I think the patch is ready for committer.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n\n", "msg_date": "Sun, 23 Jan 2022 14:33:29 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] reduce page overlap of GiST indexes built using sorted\n method" }, { "msg_contents": "Hi,\n\nOn 2022-01-18 23:26:05 +0300, sergei sh. wrote:\n> I've addressed Andrey Borodin's concerns about v2 of this patch by\n> Aliaksandr\n> Kalenik in attached version. Change list:\n> * Number of pages to collect moved to GUC parameter\n> \"gist_sorted_build_page_buffer_size\".\n> * GistSortedBuildPageState type renamed to GistSortedBuildLevelState.\n> * Comments added.\n\nUnfortunately the tests don't seem to pass on any platform, starting with this\nversion:\nhttps://cirrus-ci.com/build/4808414281859072\n\nPresumably the fault of v1-amcheck-gist-no-btree.patch, which comments out\nsome functions. 
I assume that patch isn't intended to be part of the\nsubmission?\n\nhttps://wiki.postgresql.org/wiki/Cfbot#Which_attachments_are_considered_to_be_patches.3F\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 23 Jan 2022 14:17:15 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] reduce page overlap of GiST indexes built using sorted\n method" }, { "msg_contents": "On 1/23/22 12:33, Andrey Borodin wrote:\n> \n> \n>> 19 янв. 2022 г., в 09:31, Andrey Borodin <x4mmm@yandex-team.ru> написал(а):\n>>>\n>>> I've addressed Andrey Borodin's concerns about v2 of this patch by Aliaksandr\n>>> Kalenik in attached version.\n>>\n>> Thank you! I'll make a new iteration of review. From a first glance everything looks good, but gist_sorted_build_page_buffer_size haven't any documentation....\n> \n> I've made one more iteration. The code generally looks OK to me.\n> \n> Some nitpicking:\n> 1. gist_sorted_build_page_buffer_size is not documented yet\n> 2. Comments correctly state that check for interrupts is done once per whatever. Let's make \"whatever\" == \"1 page flush\" again.\n> 3. There is \"Size i\" in a loop. I haven't found usage of Size, but many size_t-s. For the same purpose in the same file mostly \"int i\" is used.\n> 4. Many multiline comments are formatted in an unusual manner.\n> \n> Besides this I think the patch is ready for committer.\n> \n> Thanks!\n> \n> Best regards, Andrey Borodin.\n> \n\nHi,\n\nI've fixed issues 2 -- 4 in attached version.\n\nGUC parameter has been removed for the number of pages to collect\nbefore splitting and fixed value of 4 is used instead, as discussed\noff-list with Andrey Borodin, Aliaksandr Kalenik, Darafei\nPraliaskouski. 
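To show the shape of the buffering (only a rough Python illustration — a 1-D
sorted cut stands in for the real opclass picksplit, and none of these names
come from the patch):

```python
# Toy sketch of buffered page emission in a sorted GiST build.
# Instead of flushing each page as soon as it fills, the items of a few
# pages' worth (the patch settles on 4) are collected and a picksplit-style
# partition decides the page boundaries, shrinking the key range each
# emitted page has to cover.  The 1-D "keys" and splitter are invented.

PAGE_CAP = 4      # items per page (tiny, for the example)
BUFFER_PAGES = 4  # pages' worth of items collected before splitting

def split_buffer(buf):
    # stand-in for the opclass picksplit: sort and cut into contiguous
    # runs, so each page covers a narrow, non-overlapping key range
    buf = sorted(buf)
    return [buf[i:i + PAGE_CAP] for i in range(0, len(buf), PAGE_CAP)]

def emit_pages(items):
    pages = []
    buf = []
    for it in items:
        buf.append(it)
        if len(buf) == PAGE_CAP * BUFFER_PAGES:
            pages.extend(split_buffer(buf))
            buf = []
    if buf:
        pages.extend(split_buffer(buf))  # flush the partial last buffer
    return pages

pages = emit_pages([9, 1, 14, 6, 3, 12, 0, 7, 5, 2, 15, 11, 4, 13, 8, 10])
ranges = [(p[0], p[-1]) for p in pages]
print(ranges)  # [(0, 3), (4, 7), (8, 11), (12, 15)]
```

With items regrouped at each flush, the emitted pages end up with narrow,
non-overlapping key ranges even for unordered input within a buffer.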
Benchmarking shows that using higher values has almost\nno effect on query efficiency while increasing index building time.\n\nPSA graphs for index creation and query time, \"tiling\" and \"self-join\"\nrefer to queries used in previous benchmarks:\nhttps://github.com/mngr777/pg_index_bm2\n\nSorted build method description has been added in GiST README.", "msg_date": "Wed, 26 Jan 2022 19:07:00 +0300", "msg_from": "\"sergei sh.\" <sshoulbakov@kontur.io>", "msg_from_op": false, "msg_subject": "Re: [PATCH] reduce page overlap of GiST indexes built using sorted\n method" }, { "msg_contents": "Hi!\n\nOn Wed, Jan 26, 2022 at 7:07 PM sergei sh. <sshoulbakov@kontur.io> wrote:\n> I've fixed issues 2 -- 4 in attached version.\n>\n> GUC parameter has been removed for the number of pages to collect\n> before splitting and fixed value of 4 is used instead, as discussed\n> off-list with Andrey Borodin, Aliaksandr Kalenik, Darafei\n> Praliaskouski. Benchmarking shows that using higher values has almost\n> no effect on query efficiency while increasing index building time.\n>\n> PSA graphs for index creation and query time, \"tiling\" and \"self-join\"\n> refer to queries used in previous benchmarks:\n> https://github.com/mngr777/pg_index_bm2\n>\n> Sorted build method description has been added in GiST README.\n\nThank you for the revision. This patch looks good to me. I've\nslightly adjusted comments and formatting and wrote the commit\nmessage.\n\nI'm going to push this if no objections.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Fri, 4 Feb 2022 03:52:56 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] reduce page overlap of GiST indexes built using sorted\n method" }, { "msg_contents": "Hi,\n\n04.02.2022 03:52, Alexander Korotkov wrote:\n> Thank you for the revision. This patch looks good to me. 
I've\n> slightly adjusted comments and formatting and wrote the commit\n> message.\n>\n> I'm going to push this if no objections.\n\nWhile exploring the gist test coverage (that is discussed in [1])\nI've found that this block in regress/sql/gist.sql:\n-- rebuild the index with a different fillfactor\nalter index gist_pointidx SET (fillfactor = 40);\nreindex index gist_pointidx;\n\ndoesn't do what is declared.\nIn fact fillfactor is ignored by default now. I've added:\nselect pg_relation_size('gist_pointidx');\nafter reindex and get the same size with any fillfactor.\nfillfactor = 40: pg_relation_size = 122880\nfillfactor = 100: pg_relation_size = 122880\n\nThough size of the index really changes on REL_14_STABLE:\nfillfactor = 40: pg_relation_size = 294912\nfillfactor = 100: pg_relation_size = 122880\n\nI've found that the behavior changed after f1ea98a79. I see a comment there:\n/* fillfactor ignored */\nbut maybe this change should be reflected on higher levels (tests, docs [2],\nRN) too?\n\nFor now the fillfactor option still works for the buffering build, but maybe\nit could be just made unsupported as it is not supported for gist, brin...\n\n[1] https://www.postgresql.org/message-id/flat/20230331050726.agslrnb7e7sqtpw2%40awork3.anarazel.de\n[2] https://www.postgresql.org/docs/devel/sql-createindex.html\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Wed, 5 Apr 2023 07:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] reduce page overlap of GiST indexes built using sorted\n method" } ]
[ { "msg_contents": "Noah suggested in [1] that we should make an effort to allow any one\nof the core regression tests to be run mostly standalone (i.e., after\nrunning only the test_setup script), so as to allow quicker iterations\nwhen adjusting a script. This'd presumably also lead to the tests\nbeing more independent, which seems like a good thing. I spent a\nbit of time looking into this idea, and attached are a couple of\ndraft patches for discussion.\n\nI soon realized that complete independence was probably infeasible,\nand not very useful anyway. Notably, it doesn't seem useful to get\nrid of the geometry script's dependencies on the per-geometric-type\nscripts, nor of horology's dependencies on the per-datetime-type\nscripts. I suppose we could think of just merging the per-type\nscripts into geometry and horology, but that does not seem like an\nimprovement. So my goal here is to get rid of *most* dependencies,\nand ensure that the remainder are documented in the parallel_schedule\nfile. Also note that I'm explicitly not promising that the tests\ncan now run in any order --- I've made no attempt to get rid of\n\"A can't run before B\" or \"A can't run concurrently with B\"\nrestrictions.\n\n0001 below gets rid of dependencies on the create_function_N scripts,\nby moving functions they define into either the particular script\nthat uses the function (for the ones referenced in only one script,\nwhich is most) or into the test_setup script. It turns out that\ncreate_function_1 and create_function_2 go away entirely, because\nnothing's left. 
While I've not done so here, I'm tempted to rename\ncreate_function_0 to create_function_c and create_function_3 to\ncreate_function_sql, to give them better-defined charters and\neliminate the confusion with trailing digits for variant files.\n(With that division of labor in mind, 0001 does move a couple of\nSQL functions from create_function_0 to create_function_3.)\n\n0001 also moves some hash functions that were created in insert.sql\ninto test_setup, because they were also used elsewhere. I also\ncleaned up some other type-related script interdependencies, by\nconsolidating the \"widget\"-related code into create_type, removing\na dependency on the custom path ## path operator in favor of the\nequivalent built-in ?# operator, and declaring the textrange and\nfloat8range types in test_setup. Lastly, 0001 fixes the\ntab_core_types test case in type_sanity so that it only covers\nbuilt-in types, not types that randomly happen to be created in\ntest scripts that run before type_sanity.\n\n0002 performs a similar set of transformations to get rid of\ntable-related script interdependencies. I identified a dozen or so\ntables that are used in multiple scripts and (for the most part)\nare not modified once filled. I moved the creation and filling of\nthose into test_setup. There were also some tables that were really\nonly used in one script, so I could move their creation and filling to\nthat script, leaving no cross-script dependencies on create_table.sql\nor copy.sql. I made some other adjustments to get rid of incidental\ncross-script dependencies. There are a lot more judgment calls in\n0002 than 0001, though, so people might have objections or better\nideas. Notably:\n\n* A few scripts insisted on modifying the \"shared\" tables, which\nseemed like something to get rid of. What I did, to minimize the\ndiffs in these scripts, was to make them create temporary tables\nof the same names and then scribble on the temp tables. 
There's\nan argument to be made that this will be too confusing and we'd be\nbetter off changing the scripts to use different names for these\nlocal tables. That'd make the patch even bulkier, though.\n\n* create_index made some indexes on circle_tbl and polygon_tbl,\nwhich I didn't want to treat as shared tables. I moved those\nindexes and the associated test queries to the end of geometry.sql.\nThey could have been made in circle.sql and polygon.sql,\nbut I was worried that that would possibly change plans for\nexisting queries in geometry.sql.\n\n* create_index also had some queries on array_op_test, which\nI'm now treating as private to arrays.sql. The purpose of\nthose was to compare index-free results to indexable queries\non array_index_op_test, which is now private to create_index.\nSo what I did was to replace those by doing the same queries\non array_index_op_test before building its indexes. This is\na better way anyway since it doesn't require the unstated\nassumption that array_op_test and array_index_op_test\ncontain identical data.\n\n* The situation with a_star and its child tables was a bit of a mess.\nThey were created in create_table.sql, populated in create_misc.sql,\nthen misc.sql did significant DDL on them, and finally select_parallel\nused them in queries (and would fail outright if the DDL changes\nhadn't been made). What I've done here is to move the create_table\nand misc steps into create_misc, and then allow select_parallel to\ndepend on create_misc. You could argue for chopping that up\ndifferently, perhaps, but I'm not seeing alternatives I like better.\n\n* Having established the precedent that I'd allow some cross-script\ndependencies on create_misc, I adjusted a couple of places that\nwere depending on the \"b\" table made by inherit.sql to depend on\ncreate_misc's b_star, which has just about the same schema including\nchildren. 
I figured multiple dependencies on create_misc was better\nthan some on create_misc and some on inherit. (So maybe there's\na case for moving that entire sequence into test_setup? But it\nseems like a big hunk that doesn't belong there.)\n\n* Another table with an unreasonably large footprint was the \"tmp\"\ntable made (but not used) in select.sql, used in select_distinct and\nselect_distinct_on, and then modified and eventually dropped in\nmisc.sql. It's just luck this doesn't collide with usages of\ntables named \"tmp\" in some other scripts. Since \"tmp\" is just a\ncopy of some columns from \"onek\", I adjusted select_distinct and\nselect_distinct_on to select from \"onek\" instead, and then\nconsolidated the usage of the table into misc.sql. (I'm half\ntempted to drop the table and test cases from misc altogether.\nThe comments there indicate that this is a 25-year-old test for\nsome b-tree problem or other --- but tmp has no indexes, so it\ncan't any longer be testing what it was intended to. But removing\ntest cases is not in the charter of this patch series, I guess.)\n\n* expressions.sql had some BETWEEN tests depending on date_tbl,\nwhich I resolved by moving those tests to horology.sql. We could\nalternatively change them to use some other table/datatype, or\njust accept the extra dependency.\n\n* The rules and sanity_check scripts are problematic because\ntheir results depend heavily on just which scripts execute\nbefore them. In this patch I've adopted a big hammer:\nI trimmed rules' output by restricting it to only print\ninfo about pg_catalog relations, and I dropped the troublesome\nsanity_check query altogether. I don't think that sanity_check\nquery has any real use, certainly not enough to justify the\nmaintenance effort we've put into it over the years. Maybe\nthere's an objection to restricting the coverage of rules,\nthough. (One idea to exercise ruleutils.c more is to let that\nquery cover information_schema as well as pg_catalog. 
Local\ncode-coverage testing says there's not much difference, though.)\n\nSome things I'm not totally happy about:\n\n* Testing shows that quite a few scripts have dependencies on\ncreate_index, because their EXPLAIN output or row output order\nvaries if the indexes aren't there. This dependency could\nlikely be removed by moving creation of some of the indexes on\nthe \"shared\" tables into test_setup, but I'm unconvinced whether\nthat's a good thing to do or not. I can live with documenting\ncreate_index as a common dependency.\n\n* I treated point_tbl as a shared table, but I'm not sure that's a\ngreat idea, especially since the non-geometry consumers of point_tbl\nboth want to scribble on it. Doing something else would be more\ninvasive though.\n\n* psql.sql has a dependency on create_am, because the \"heap2\" access\nmethod that that creates shows up in psql's output. This seems fairly\nannoying, since there's no good semantic excuse for such coupling.\nOne quick-and-dirty workaround could be to run the psql test before\ncreate_am.\n\n* amutils depends on indexes from all over the map, so it\nhas a rather horrid dependency list. Perhaps we should change\nit to print info about indexes it manufactures locally.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\nPS: To save anyone else the work of reinventing it, I attach\na script I used to confirm that the modified test scripts have\nno unexpected dependencies. I don't propose to commit this,\nespecially not in its current hacky state of overwriting the\nparallel_schedule file. 
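The schedule-munging core of such a check is small; a hypothetical Python
sketch (names invented, not taken from the attached script — the real
thing also has to invoke pg_regress and compare results):

```python
# Sketch of the dependency check described above: run each test with
# only test_setup before it, instead of the whole parallel schedule.
# This just builds the per-test schedule text; running pg_regress
# against it is left out.

def standalone_schedule(schedule_text, test_name):
    """Return a schedule that runs test_setup and then only test_name."""
    known = set()
    for line in schedule_text.splitlines():
        if line.startswith("test:"):
            known.update(line.split()[1:])
    if test_name not in known:
        raise ValueError(f"{test_name} not in schedule")
    return f"test: test_setup\ntest: {test_name}\n"

sched = "test: test_setup\ntest: boolean char name\ntest: geometry horology\n"
print(standalone_schedule(sched, "geometry"))
# test: test_setup
# test: geometry
```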
(Maybe we should provide a way to\nrun specified test script(s) *without* invoking the whole\nschedule first.)\n\n[1] https://www.postgresql.org/message-id/20211217182518.GA2529654%40rfd.leadboat.com", "msg_date": "Fri, 24 Dec 2021 17:00:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Refactoring the regression tests for more independence" }, { "msg_contents": "On Fri, Dec 24, 2021 at 05:00:17PM -0500, Tom Lane wrote:\n> While I've not done so here, I'm tempted to rename\n> create_function_0 to create_function_c and create_function_3 to\n> create_function_sql, to give them better-defined charters and\n> eliminate the confusion with trailing digits for variant files.\n\n+1\n\n> (Maybe we should provide a way to run specified test script(s) *without*\n> invoking the whole schedule first.)\n\n+1 ; it can be done later, though.\n\nIt's nice to be able to get feedback within a few seconds. That supports the \nidea of writing tests earlier.\n\nI guess this may expose some instabilities due to timing of autovacuum (which\nI'd say is a good thing).\n\nIf you rearrange the creation of objects, that may provide an opportunity to\nrename some tables with too-short names, since backpatching would already have\nconflicts.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 2 Jan 2022 11:49:45 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring the regression tests for more independence" }, { "msg_contents": "Not too surprisingly, these patches broke during the commitfest.\nHere's a rebased version.\n\nI'm not sure that anyone wants to review these in detail ...\nshould I just go ahead and push them?\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 07 Feb 2022 14:00:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Refactoring the regression tests for more independence" }, { "msg_contents": "On Mon, Feb 07, 2022 at 02:00:25PM -0500, Tom Lane 
wrote:\n> Not too surprisingly, these patches broke during the commitfest.\n> Here's a rebased version.\n> \n> I'm not sure that anyone wants to review these in detail ...\n> should I just go ahead and push them?\n\nI don't see anything shocking after a quick glance, and I don't think any\nreview is going to give any more confidence compared to the script-dep-testing\nscript, so +1 for pushing them since the cf bot is green again.\n\n\n", "msg_date": "Tue, 8 Feb 2022 10:57:40 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring the regression tests for more independence" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Mon, Feb 07, 2022 at 02:00:25PM -0500, Tom Lane wrote:\n>> Not too surprisingly, these patches broke during the commitfest.\n>> Here's a rebased version.\n>> I'm not sure that anyone wants to review these in detail ...\n>> should I just go ahead and push them?\n\n> I don't see anything shocking after a quick glance, and I don't think any\n> review is going to give any more confidence compared to the script-dep-testing\n> script, so +1 for pushing them since the cf bot is green again.\n\nDone, will watch the farm.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 08 Feb 2022 15:42:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Refactoring the regression tests for more independence" }, { "msg_contents": "Hi hackers,\n\n> > I don't see anything shocking after a quick glance, and I don't think any\n> > review is going to give any more confidence compared to the script-dep-testing\n> > script, so +1 for pushing them since the cf bot is green again.\n>\n> Done, will watch the farm.\n\nI wanted to test one of the patches we have for the July CF on the\nRaspberry Pi 3 Model B+. 
It runs Raspbian GNU/Linux 10 (buster) and\nLinux Kernel 5.10.60-v7+.\n\nI discovered that the PostgreSQL tests don't pass in this environment.\nUsing `git bisect` I was able to pinpoint the problem to cc50080a and\nfound this thread. Currently `REL_15_STABLE` and `master` are\naffected.\n\nTo build PostgreSQL I use my regular set of scripts [1] and the\nfollowing command:\n\n```\n./quick-build.sh && ./single-install.sh && make installcheck\n```\n\nregression.diffs and regression.out are attached. The same tests pass\njust fine on Ubuntu Linux and MacOS.\n\nI didn't investigate the problem further since it's pretty late in my\ntimezone. I just wanted to share my current discoveries with you.\nSince we have several agents on buildfarm running Raspbian that are\npretty happy with the patch I would guess this may have something to\ndo with particular flags and/or configure options I'm using.\n\n[1]: https://github.com/afiskon/pgscripts/\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Fri, 22 Jul 2022 22:21:35 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring the regression tests for more independence" }, { "msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> I wanted to test one of the patches we have for the July CF on the\n> Raspberry Pi 3 Model B+. It runs Raspbian GNU/Linux 10 (buster) and\n> Linux Kernel 5.10.60-v7+.\n> I discovered that the PostgreSQL tests don't pass in this environment.\n\nSince you haven't explained what's different about this environment,\nit's hard to comment on these results. But is this really a stock\nPostgres source tree, with no local modifications? 
The fragment of\nsrc/test/regress/expected/copy.out that you show does not look\ncurrent.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Jul 2022 15:42:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Refactoring the regression tests for more independence" }, { "msg_contents": "Hi Tom,\n\n> Since you haven't explained what's different about this environment,\n> it's hard to comment on these results. But is this really a stock\n> Postgres source tree, with no local modifications? The fragment of\n> src/test/regress/expected/copy.out that you show does not look\n> current.\n\nYes, this is a stock PostgreSQL source code without any modification,\nwith `git clean -dfx` etc.\n\nThe fragment of copy.out probably doesn't look current because I was\nusing `git bisect` and I'm on cc50080a82 right now. However the same\ntests fail on both `master` and `REL_15_STABLE`. It takes a while on\nRaspberry Pi to rebuild Postgres :)\n\nTo clarify, the step that is failing is `./quick-build.sh`, or `make\ncheck' in this script to be precise. So postgresql.conf I'm using in\nsingle-install.sh has nothing to do with the problem, this step is not\nreached.\n\nSorry about the confusion regarding the environment differences. GCC\nversion is 8.3.0, Perl 5.28.1. All in all this is pretty much the\ndefault Raspbian 10 environment, something you would typically get\nafter setting up your RPi 3 B+ using Raspberry Pi Imager and running\n`apt update`, nothing exotic. 
Please let me know if there are any\nother details of interest.\n\nI'll continue looking for the source of the problem and will post an\nupdate as soon as I have one.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 22 Jul 2022 22:58:28 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring the regression tests for more independence" }, { "msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> Sorry about the confusion regarding the environment differences. GCC\n> version is 8.3.0, Perl 5.28.1. All in all this is pretty much the\n> default Raspbian 10 environment, something you would typically get\n> after setting up your RPi 3 B+ using Raspberry Pi Imager and running\n> `apt update`, nothing exotic. Please let me know if there are any\n> other details of interest.\n\nFWIW, I tried to replicate this locally on my own RPi3B+, using\ncurrent Ubuntu 20.04.4 LTS (GNU/Linux 5.4.0-1066-raspi aarch64).\nNo luck: it all works fine for me. We have at least one Raspbian\nbuildfarm animal too, and it's not been unhappy either. I suspect\nthere is something odd about your environment settings.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Jul 2022 18:58:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Refactoring the regression tests for more independence" }, { "msg_contents": "Hi Tom,\n\n> FWIW, I tried to replicate this locally on my own RPi3B+, using\n> current Ubuntu 20.04.4 LTS (GNU/Linux 5.4.0-1066-raspi aarch64).\n> No luck: it all works fine for me. We have at least one Raspbian\n> buildfarm animal too, and it's not been unhappy either. 
I suspect\n> there is something odd about your environment settings.\n\nThanks for sharing this.\n\nI repeated the experiment in a clean environment (Raspbian installed\nfrom scratch on a brand new SD-card) and can confirm that the problem\nis gone.\n\nSorry for the disturbance.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Sat, 23 Jul 2022 13:58:31 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Refactoring the regression tests for more independence" } ]
[ { "msg_contents": "Executing generic plans involving partitions is known to become slower\nas partition count grows due to a number of bottlenecks, with\nAcquireExecutorLocks() showing at the top in profiles.\n\nPrevious attempt at solving that problem was by David Rowley [1],\nwhere he proposed delaying locking of *all* partitions appearing under\nan Append/MergeAppend until \"initial\" pruning is done during the\nexecutor initialization phase. A problem with that approach that he\nhas described in [2] is that leaving partitions unlocked can lead to\nrace conditions where the Plan node belonging to a partition can be\ninvalidated when a concurrent session successfully alters the\npartition between AcquireExecutorLocks() saying the plan is okay to\nexecute and then actually executing it.\n\nHowever, using an idea that Robert suggested to me off-list a little\nwhile back, it seems possible to determine the set of partitions that\nwe can safely skip locking. The idea is to look at the \"initial\" or\n\"pre-execution\" pruning instructions contained in a given Append or\nMergeAppend node when AcquireExecutorLocks() is collecting the\nrelations to lock and consider relations from only those sub-nodes\nthat survive performing those instructions. 
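To make the locking rule concrete, here is a deliberately tiny Python sketch
of the idea — the node shapes, names, and the hash-pruning lambda are all
invented for illustration and are nothing like the patch's actual C
structures:

```python
# Toy model of AcquireExecutorLocks() consulting "initial" pruning.
# Only children of an Append that survive the parameter-dependent
# pruning step contribute relations to the set that must be locked.

def relations_to_lock(plan_node, param_values):
    """Collect relation ids to lock for a (mock) generic plan tree."""
    to_lock = set()
    if plan_node["type"] == "Append":
        children = plan_node["children"]
        # "initial" pruning may depend only on parameter values,
        # so it can run before any partition is locked
        prune = plan_node.get("initial_prune")
        surviving = prune(param_values) if prune else range(len(children))
        for i in surviving:
            to_lock |= relations_to_lock(children[i], param_values)
    else:  # leaf scan node
        to_lock.add(plan_node["relid"])
    return to_lock

# A generic plan over a table hash-partitioned 4 ways on "key":
plan = {
    "type": "Append",
    "initial_prune": lambda params: [params["key"] % 4],
    "children": [{"type": "SeqScan", "relid": f"part_{i}"} for i in range(4)],
}
print(relations_to_lock(plan, {"key": 17}))  # {'part_1'}
```

(A single surviving partition is locked per execution here, instead of all
four.)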
I've attempted\nimplementing that idea in the attached patch.\n\nNote that \"initial\" pruning steps are now performed twice when\nexecuting generic plans: once in AcquireExecutorLocks() to find\npartitions to be locked, and a 2nd time in ExecInit[Merge]Append() to\ndetermine the set of partition sub-nodes to be initialized for\nexecution, though I wasn't able to come up with a good idea to avoid\nthis duplication.\n\nUsing the following benchmark setup:\n\npgbench testdb -i --partitions=$nparts > /dev/null 2>&1\npgbench -n testdb -S -T 30 -Mprepared\n\nAnd plan_cache_mode = force_generic_plan,\n\nI get following numbers:\n\nHEAD:\n\n32 tps = 20561.776403 (without initial connection time)\n64 tps = 12553.131423 (without initial connection time)\n128 tps = 13330.365696 (without initial connection time)\n256 tps = 8605.723120 (without initial connection time)\n512 tps = 4435.951139 (without initial connection time)\n1024 tps = 2346.902973 (without initial connection time)\n2048 tps = 1334.680971 (without initial connection time)\n\nPatched:\n\n32 tps = 27554.156077 (without initial connection time)\n64 tps = 27531.161310 (without initial connection time)\n128 tps = 27138.305677 (without initial connection time)\n256 tps = 25825.467724 (without initial connection time)\n512 tps = 19864.386305 (without initial connection time)\n1024 tps = 18742.668944 (without initial connection time)\n2048 tps = 16312.412704 (without initial connection time)\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/CAKJS1f_kfRQ3ZpjQyHC7=PK9vrhxiHBQFZ+hc0JCwwnRKkF3hg@mail.gmail.com\n\n[2] https://www.postgresql.org/message-id/CAKJS1f99JNe%2Bsw5E3qWmS%2BHeLMFaAhehKO67J1Ym3pXv0XBsxw%40mail.gmail.com", "msg_date": "Sat, 25 Dec 2021 12:36:00 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "generic plans and \"initial\" pruning" }, { "msg_contents": "On Sat, Dec 25, 2021 at 9:06 AM Amit Langote 
<amitlangote09@gmail.com> wrote:\n>\n> Executing generic plans involving partitions is known to become slower\n> as partition count grows due to a number of bottlenecks, with\n> AcquireExecutorLocks() showing at the top in profiles.\n>\n> Previous attempt at solving that problem was by David Rowley [1],\n> where he proposed delaying locking of *all* partitions appearing under\n> an Append/MergeAppend until \"initial\" pruning is done during the\n> executor initialization phase. A problem with that approach that he\n> has described in [2] is that leaving partitions unlocked can lead to\n> race conditions where the Plan node belonging to a partition can be\n> invalidated when a concurrent session successfully alters the\n> partition between AcquireExecutorLocks() saying the plan is okay to\n> execute and then actually executing it.\n>\n> However, using an idea that Robert suggested to me off-list a little\n> while back, it seems possible to determine the set of partitions that\n> we can safely skip locking. The idea is to look at the \"initial\" or\n> \"pre-execution\" pruning instructions contained in a given Append or\n> MergeAppend node when AcquireExecutorLocks() is collecting the\n> relations to lock and consider relations from only those sub-nodes\n> that survive performing those instructions. I've attempted\n> implementing that idea in the attached patch.\n>\n\nIn which cases, we will have \"pre-execution\" pruning instructions that\ncan be used to skip locking partitions? 
Can you please give a few\nexamples where this approach will be useful?\n\nThe benchmark is showing good results, indeed.\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 28 Dec 2021 18:42:00 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Dec 28, 2021 at 22:12 Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> On Sat, Dec 25, 2021 at 9:06 AM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> >\n> > Executing generic plans involving partitions is known to become slower\n> > as partition count grows due to a number of bottlenecks, with\n> > AcquireExecutorLocks() showing at the top in profiles.\n> >\n> > Previous attempt at solving that problem was by David Rowley [1],\n> > where he proposed delaying locking of *all* partitions appearing under\n> > an Append/MergeAppend until \"initial\" pruning is done during the\n> > executor initialization phase. A problem with that approach that he\n> > has described in [2] is that leaving partitions unlocked can lead to\n> > race conditions where the Plan node belonging to a partition can be\n> > invalidated when a concurrent session successfully alters the\n> > partition between AcquireExecutorLocks() saying the plan is okay to\n> > execute and then actually executing it.\n> >\n> > However, using an idea that Robert suggested to me off-list a little\n> > while back, it seems possible to determine the set of partitions that\n> > we can safely skip locking. The idea is to look at the \"initial\" or\n> > \"pre-execution\" pruning instructions contained in a given Append or\n> > MergeAppend node when AcquireExecutorLocks() is collecting the\n> > relations to lock and consider relations from only those sub-nodes\n> > that survive performing those instructions. 
I've attempted\n> > implementing that idea in the attached patch.\n> >\n>\n> In which cases, we will have \"pre-execution\" pruning instructions that\n> can be used to skip locking partitions? Can you please give a few\n> examples where this approach will be useful?\n\n\nThis is mainly to be useful for prepared queries, so something like:\n\nprepare q as select * from partitioned_table where key = $1;\n\nAnd that too when execute q(…) uses a generic plan. Generic plans are\nproblematic because it must contain nodes for all partitions (without any\nplan time pruning), which means CheckCachedPlan() has to spend time\nproportional to the number of partitions to determine that the plan is\nstill usable / has not been invalidated; most of that is\nAcquireExecutorLocks().\n\nOther bottlenecks, not addressed in this patch, pertain to some executor\nstartup/shutdown subroutines that process the range table of a PlannedStmt\nin its entirety, whose length is also proportional to the number of\npartitions when the plan is generic.\n\nThe benchmark is showing good results, indeed.\n\n\nThanks.\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n", "msg_date": "Fri, 31 Dec 2021 11:26:11 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Dec 31, 2021 at 7:56 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Tue, Dec 28, 2021 at 22:12 Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:\n>>\n>> On Sat, Dec 25, 2021 at 9:06 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>> >\n>> > Executing generic plans involving partitions is known to become slower\n>> > as partition count grows due to a number of bottlenecks, with\n>> > AcquireExecutorLocks() showing at the top in profiles.\n>> >\n>> > Previous attempt at solving that problem was by David Rowley [1],\n>> > where he proposed delaying locking of *all* partitions appearing under\n>> > an Append/MergeAppend until \"initial\" pruning is done during the\n>> > executor initialization phase. 
A problem with that approach that he\n>> > has described in [2] is that leaving partitions unlocked can lead to\n>> > race conditions where the Plan node belonging to a partition can be\n>> > invalidated when a concurrent session successfully alters the\n>> > partition between AcquireExecutorLocks() saying the plan is okay to\n>> > execute and then actually executing it.\n>> >\n>> > However, using an idea that Robert suggested to me off-list a little\n>> > while back, it seems possible to determine the set of partitions that\n>> > we can safely skip locking. The idea is to look at the \"initial\" or\n>> > \"pre-execution\" pruning instructions contained in a given Append or\n>> > MergeAppend node when AcquireExecutorLocks() is collecting the\n>> > relations to lock and consider relations from only those sub-nodes\n>> > that survive performing those instructions. I've attempted\n>> > implementing that idea in the attached patch.\n>> >\n>>\n>> In which cases, we will have \"pre-execution\" pruning instructions that\n>> can be used to skip locking partitions? Can you please give a few\n>> examples where this approach will be useful?\n>\n>\n> This is mainly to be useful for prepared queries, so something like:\n>\n> prepare q as select * from partitioned_table where key = $1;\n>\n> And that too when execute q(…) uses a generic plan. 
Generic plans are problematic because it must contain nodes for all partitions (without any plan time pruning), which means CheckCachedPlan() has to spend time proportional to the number of partitions to determine that the plan is still usable / has not been invalidated; most of that is AcquireExecutorLocks().\n>\n> Other bottlenecks, not addressed in this patch, pertain to some executor startup/shutdown subroutines that process the range table of a PlannedStmt in its entirety, whose length is also proportional to the number of partitions when the plan is generic.\n>\n>> The benchmark is showing good results, indeed.\n>\nIndeed.\n\nHere are few comments for v1 patch:\n\n+ /* Caller error if we get here without contains_init_steps */\n+ Assert(pruneinfo->contains_init_steps);\n\n- prunedata = prunestate->partprunedata[i];\n- pprune = &prunedata->partrelprunedata[0];\n\n- /* Perform pruning without using PARAM_EXEC Params */\n- find_matching_subplans_recurse(prunedata, pprune, true, &result);\n+ if (parentrelids)\n+ *parentrelids = NULL;\n\nYou got two blank lines after Assert.\n--\n\n+ /* Set up EState if not in the executor proper. */\n+ if (estate == NULL)\n+ {\n+ estate = CreateExecutorState();\n+ estate->es_param_list_info = params;\n+ free_estate = true;\n }\n\n... 
[Skip]\n\n+ if (free_estate)\n+ {\n+ FreeExecutorState(estate);\n+ estate = NULL;\n }\n\nI think this work should be left to the caller.\n--\n\n /*\n * Stuff that follows matches exactly what ExecCreatePartitionPruneState()\n * does, except we don't need a PartitionPruneState here, so don't call\n * that function.\n *\n * XXX some refactoring might be good.\n */\n\n+1, while doing it would be nice if foreach_current_index() is used\ninstead of the i & j sequence in the respective foreach() block, IMO.\n--\n\n+ while ((i = bms_next_member(validsubplans, i)) >= 0)\n+ {\n+ Plan *subplan = list_nth(subplans, i);\n+\n+ context->relations =\n+ bms_add_members(context->relations,\n+ get_plan_scanrelids(subplan));\n+ }\n\nI think instead of get_plan_scanrelids() the\nGetLockableRelations_worker() can be used; if so, then no need to add\nget_plan_scanrelids() function.\n--\n\n /* Nodes containing prunable subnodes. */\n+ case T_MergeAppend:\n+ {\n+ PlannedStmt *plannedstmt = context->plannedstmt;\n+ List *rtable = plannedstmt->rtable;\n+ ParamListInfo params = context->params;\n+ PartitionPruneInfo *pruneinfo;\n+ Bitmapset *validsubplans;\n+ Bitmapset *parentrelids;\n\n...\n if (pruneinfo && pruneinfo->contains_init_steps)\n {\n int i;\n...\n return false;\n }\n }\n break;\n\nMost of the declarations need to be moved inside the if-block.\n\nAlso, initially, I was a bit concerned regarding this code block\ninside GetLockableRelations_worker(), what if (pruneinfo &&\npruneinfo->contains_init_steps) evaluated to false? 
After debugging I\nrealized that plan_tree_walker() will do the needful -- a bit of\ncomment would have helped.\n--\n\n+ case T_CustomScan:\n+ foreach(lc, ((CustomScan *) plan)->custom_plans)\n+ {\n+ if (walker((Plan *) lfirst(lc), context))\n+ return true;\n+ }\n+ break;\n\nWhy not plan_walk_members() call like other nodes?\n\nRegards,\nAmul\n\n\n", "msg_date": "Thu, 6 Jan 2022 12:14:33 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Dec 24, 2021 at 10:36 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> However, using an idea that Robert suggested to me off-list a little\n> while back, it seems possible to determine the set of partitions that\n> we can safely skip locking. The idea is to look at the \"initial\" or\n> \"pre-execution\" pruning instructions contained in a given Append or\n> MergeAppend node when AcquireExecutorLocks() is collecting the\n> relations to lock and consider relations from only those sub-nodes\n> that survive performing those instructions. I've attempted\n> implementing that idea in the attached patch.\n\nHmm. The first question that occurs to me is whether this is fully safe.\n\nCurrently, AcquireExecutorLocks calls LockRelationOid for every\nrelation involved in the query. That means we will probably lock at\nleast one relation on which we previously had no lock and thus\nAcceptInvalidationMessages(). That will end up marking the query as no\nlonger valid and CheckCachedPlan() will realize this and tell the\ncaller to replan. In the corner case where we already hold all the\nrequired locks, we will not accept invalidation messages at this\npoint, but must have done so after acquiring the last of the locks\nrequired, and if that didn't mark the plan invalid, it can't be\ninvalid now either. Either way, everything is fine.\n\nWith the proposed patch, we might never lock some of the relations\ninvolved in the query. 
Therefore, if one of those relations has been\nmodified in some way that would invalidate the plan, we will\npotentially fail to discover this, and will use the plan anyway. For\ninstance, suppose there's one particular partition that has an extra\nindex and the plan involves an Index Scan using that index. Now\nsuppose that the scan of the partition in question is pruned, but\nmeanwhile, the index has been dropped. Now we're running a plan that\nscans a nonexistent index. Admittedly, we're not running that part of\nthe plan. But is that enough for this to be safe? There are things\n(like EXPLAIN or auto_explain) that we might try to do even on a part\nof the plan tree that we don't try to run. Those things might break,\nbecause for example we won't be able to look up the name of an index\nin the catalogs for EXPLAIN output if the index is gone.\n\nThis is just a relatively simple example and I think there are\nprobably a bunch of others. There are a lot of kinds of DDL that could\nbe performed on a partition that gets pruned away: DROP INDEX is just\none example. The point is that to my knowledge we have no existing\ncase where we try to use a plan that might be only partly valid, so if\nwe introduce one, there's some risk there. I thought for a while, too,\nabout whether changes to some object in a part of the plan that we're\nnot executing could break things for the rest of the plan even if we\nnever do anything with the plan but execute it. I can't quite see any\nactual hazard. For example, I thought about whether we might try to\nget the tuple descriptor for the pruned-away object and get a\ndifferent tuple descriptor than we were expecting. 
I think we can't,\nbecause (1) the pruned object has to be a partition, and tuple\ndescriptors have to match throughout the partitioning hierarchy,\nexcept for column ordering, which currently can't be changed\nafter-the-fact and (2) IIRC, the tuple descriptor is stored in the\nplan and not reconstructed at runtime and (3) if we don't end up\nopening the relation because it's pruned, then we certainly can't do\nanything with its tuple descriptor. But it might be worth giving more\nthought to the question of whether there's any other way we could be\ndepending on the details of an object that ended up getting pruned.\n\n> Note that \"initial\" pruning steps are now performed twice when\n> executing generic plans: once in AcquireExecutorLocks() to find\n> partitions to be locked, and a 2nd time in ExecInit[Merge]Append() to\n> determine the set of partition sub-nodes to be initialized for\n> execution, though I wasn't able to come up with a good idea to avoid\n> this duplication.\n\nI think this is something that will need to be fixed somehow. Apart\nfrom the CPU cost, it's scary to imagine that the set of nodes on\nwhich we acquired locks might be different from the set of nodes that\nwe initialize. If we do the same computation twice, there must be some\nnon-zero probability of getting a different answer the second time,\neven if the circumstances under which it would actually happen are\nremote. Consider, for example, a function that is labeled IMMUTABLE\nbut is really VOLATILE. Now maybe you can get the system to lock one\nset of partitions and then initialize a different set of partitions. 
I\ndon't think we want to try to reason about what consequences that\nmight have and prove that somehow it's going to be OK; I think we want\nto nail the door shut very tightly to make sure that it can't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 11 Jan 2022 11:22:23 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Thanks for taking the time to look at this.\n\nOn Wed, Jan 12, 2022 at 1:22 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Dec 24, 2021 at 10:36 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > However, using an idea that Robert suggested to me off-list a little\n> > while back, it seems possible to determine the set of partitions that\n> > we can safely skip locking. The idea is to look at the \"initial\" or\n> > \"pre-execution\" pruning instructions contained in a given Append or\n> > MergeAppend node when AcquireExecutorLocks() is collecting the\n> > relations to lock and consider relations from only those sub-nodes\n> > that survive performing those instructions. I've attempted\n> > implementing that idea in the attached patch.\n>\n> Hmm. The first question that occurs to me is whether this is fully safe.\n>\n> Currently, AcquireExecutorLocks calls LockRelationOid for every\n> relation involved in the query. That means we will probably lock at\n> least one relation on which we previously had no lock and thus\n> AcceptInvalidationMessages(). That will end up marking the query as no\n> longer valid and CheckCachedPlan() will realize this and tell the\n> caller to replan. In the corner case where we already hold all the\n> required locks, we will not accept invalidation messages at this\n> point, but must have done so after acquiring the last of the locks\n> required, and if that didn't mark the plan invalid, it can't be\n> invalid now either. 
Either way, everything is fine.\n>\n> With the proposed patch, we might never lock some of the relations\n> involved in the query. Therefore, if one of those relations has been\n> modified in some way that would invalidate the plan, we will\n> potentially fail to discover this, and will use the plan anyway. For\n> instance, suppose there's one particular partition that has an extra\n> index and the plan involves an Index Scan using that index. Now\n> suppose that the scan of the partition in question is pruned, but\n> meanwhile, the index has been dropped. Now we're running a plan that\n> scans a nonexistent index. Admittedly, we're not running that part of\n> the plan. But is that enough for this to be safe? There are things\n> (like EXPLAIN or auto_explain) that we might try to do even on a part\n> of the plan tree that we don't try to run. Those things might break,\n> because for example we won't be able to look up the name of an index\n> in the catalogs for EXPLAIN output if the index is gone.\n>\n> This is just a relatively simple example and I think there are\n> probably a bunch of others. There are a lot of kinds of DDL that could\n> be performed on a partition that gets pruned away: DROP INDEX is just\n> one example. The point is that to my knowledge we have no existing\n> case where we try to use a plan that might be only partly valid, so if\n> we introduce one, there's some risk there. I thought for a while, too,\n> about whether changes to some object in a part of the plan that we're\n> not executing could break things for the rest of the plan even if we\n> never do anything with the plan but execute it. I can't quite see any\n> actual hazard. For example, I thought about whether we might try to\n> get the tuple descriptor for the pruned-away object and get a\n> different tuple descriptor than we were expecting. 
I think we can't,\n> because (1) the pruned object has to be a partition, and tuple\n> descriptors have to match throughout the partitioning hierarchy,\n> except for column ordering, which currently can't be changed\n> after-the-fact and (2) IIRC, the tuple descriptor is stored in the\n> plan and not reconstructed at runtime and (3) if we don't end up\n> opening the relation because it's pruned, then we certainly can't do\n> anything with its tuple descriptor. But it might be worth giving more\n> thought to the question of whether there's any other way we could be\n> depending on the details of an object that ended up getting pruned.\n\nI have pondered on the possible hazards before writing the patch,\nmainly because the concerns about a previously discussed proposal were\nalong similar lines [1].\n\nIIUC, you're saying the plan tree is subject to inspection by non-core\ncode before ExecutorStart() has initialized a PlanState tree, which\nmust have discarded pruned portions of the plan tree. I wouldn't\nclaim to have scanned *all* of the core code that could possibly\naccess the invalidated portions of the plan tree, but from what I have\nseen, I couldn't find any site that does. An ExecutorStart_hook()\ngets to do that, but from what I can see it is expected to call\nstandard_ExecutorStart() before doing its thing and supposedly only\nlooks at the PlanState tree, which must be valid. Actually, EXPLAIN\nalso does ExecutorStart() before starting to look at the plan (the\nPlanState tree), so must not run into pruned plan tree nodes. 
All\nthat said, it does sound like wishful thinking to say that no problems\ncan possibly occur.\n\nAt first, I had tried to implement this such that the\nAppend/MergeAppend nodes are edited to record the result of initial\npruning, but it felt wrong to be munging the plan tree in plancache.c.\n\nOr, maybe this won't be a concern if performing ExecutorStart() is\nmade a part of CheckCachedPlan() somehow, which would then take locks\non the relation as the PlanState tree is built capturing any plan\ninvalidations, instead of AcquireExecutorLocks(). That does sound like\nan ambitious undertaking though.\n\n> > Note that \"initial\" pruning steps are now performed twice when\n> > executing generic plans: once in AcquireExecutorLocks() to find\n> > partitions to be locked, and a 2nd time in ExecInit[Merge]Append() to\n> > determine the set of partition sub-nodes to be initialized for\n> > execution, though I wasn't able to come up with a good idea to avoid\n> > this duplication.\n>\n> I think this is something that will need to be fixed somehow. Apart\n> from the CPU cost, it's scary to imagine that the set of nodes on\n> which we acquired locks might be different from the set of nodes that\n> we initialize. If we do the same computation twice, there must be some\n> non-zero probability of getting a different answer the second time,\n> even if the circumstances under which it would actually happen are\n> remote. Consider, for example, a function that is labeled IMMUTABLE\n> but is really VOLATILE. Now maybe you can get the system to lock one\n> set of partitions and then initialize a different set of partitions. I\n> don't think we want to try to reason about what consequences that\n> might have and prove that somehow it's going to be OK; I think we want\n> to nail the door shut very tightly to make sure that it can't.\n\nYeah, the premise of the patch is that \"initial\" pruning steps produce\nthe same result both times. 
I assumed that would be true because the\npruning steps are not allowed to contain any VOLATILE expressions.\nRegarding the possibility that IMMUTABLE labeling of functions may be\nincorrect, I haven't considered if the runtime pruning code can cope\nor whether it should try to. If such a case does occur in practice,\nthe bad outcome would be an Assert failure in\nExecGetRangeTableRelation() or using a partition unlocked in the\nnon-assert builds, the latter of which feels especially bad.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoZN-80143F8OhN8Cn5-uDae5miLYVwMapAuc%2B7%2BZ7pyNg%40mail.gmail.com\n\n\n", "msg_date": "Wed, 12 Jan 2022 23:31:53 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Wed, Jan 12, 2022 at 9:32 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> I have pondered on the possible hazards before writing the patch,\n> mainly because the concerns about a previously discussed proposal were\n> along similar lines [1].\n\nTrue. I think that the hazards are narrower with this proposal,\nbecause if you *delay* locking a partition that you eventually need,\nthen you might end up trying to actually execute a portion of the plan\nthat's no longer valid. That seems like hopelessly bad news. On the\nother hand, with this proposal, you skip locking altogether, but only\nfor parts of the plan that you don't plan to execute. That's still\nkind of scary, but not to nearly the same degree.\n\n> IIUC, you're saying the plan tree is subject to inspection by non-core\n> code before ExecutorStart() has initialized a PlanState tree, which\n> must have discarded pruned portions of the plan tree. I wouldn't\n> claim to have scanned *all* of the core code that could possibly\n> access the invalidated portions of the plan tree, but from what I have\n> seen, I couldn't find any site that does. 
An ExecutorStart_hook()\n> gets to do that, but from what I can see it is expected to call\n> standard_ExecutorStart() before doing its thing and supposedly only\n> looks at the PlanState tree, which must be valid. Actually, EXPLAIN\n> also does ExecutorStart() before starting to look at the plan (the\n> PlanState tree), so must not run into pruned plan tree nodes. All\n> that said, it does sound like wishful thinking to say that no problems\n> can possibly occur.\n\nYeah. I don't think it's only non-core code we need to worry about\neither. What if I just do EXPLAIN ANALYZE on a prepared query that\nends up pruning away some stuff? IIRC, the pruned subplans are not\nshown, so we might escape disaster here, but FWIW if I'd committed\nthat code I would have pushed hard for showing those and saying \"(not\nexecuted)\" .... so it's not too crazy to imagine a world in which\nthings work that way.\n\n> At first, I had tried to implement this such that the\n> Append/MergeAppend nodes are edited to record the result of initial\n> pruning, but it felt wrong to be munging the plan tree in plancache.c.\n\nIt is. You can't munge the plan tree: it's required to be strictly\nread-only once generated. It can be serialized and deserialized for\ntransmission to workers, and it can be shared across executions.\n\n> Or, maybe this won't be a concern if performing ExecutorStart() is\n> made a part of CheckCachedPlan() somehow, which would then take locks\n> on the relation as the PlanState tree is built capturing any plan\n> invalidations, instead of AcquireExecutorLocks(). That does sound like\n> an ambitious undertaking though.\n\nOn the surface that would seem to involve abstraction violations, but\nmaybe that could be finessed somehow. The plancache shouldn't know too\nmuch about what the executor is going to do with the plan, but it\ncould ask the executor to perform a step that has been designed for\nuse by the plancache. 
I guess the core problem here is how to pass\naround information that is node-specific before we've stood up the\nexecutor state tree. Maybe the executor could have a function that\ndoes the pruning and returns some kind of array of results that can be\nused both to decide what to lock and also what to consider as pruned\nat the start of execution. (I'm hand-waving about the details because\nI don't know.)\n\n> Yeah, the premise of the patch is that \"initial\" pruning steps produce\n> the same result both times. I assumed that would be true because the\n> pruning steps are not allowed to contain any VOLATILE expressions.\n> Regarding the possibility that IMMUTABLE labeling of functions may be\n> incorrect, I haven't considered if the runtime pruning code can cope\n> or whether it should try to. If such a case does occur in practice,\n> the bad outcome would be an Assert failure in\n> ExecGetRangeTableRelation() or using a partition unlocked in the\n> non-assert builds, the latter of which feels especially bad.\n\nRight. I think it's OK for a query to produce wrong answers under\nthose kinds of conditions - the user has broken everything and gets to\nkeep all the pieces - but doing stuff that might violate fundamental\nassumptions of the system like \"relations can only be accessed when\nholding a lock on them\" feels quite bad. It's not a stretch to imagine\nthat failing to follow those invariants could take the whole system\ndown, which is clearly too severe a consequence for the user's failure\nto label things properly.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 Jan 2022 13:20:15 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Thu, Jan 6, 2022 at 3:45 PM Amul Sul <sulamul@gmail.com> wrote:\n> Here are few comments for v1 patch:\n\nThanks Amul. 
I'm thinking about Robert's latest comments addressing\nwhich may need some rethinking of this whole design, but I decided to\npost a v2 taking care of your comments.\n\n> + /* Caller error if we get here without contains_init_steps */\n> + Assert(pruneinfo->contains_init_steps);\n>\n> - prunedata = prunestate->partprunedata[i];\n> - pprune = &prunedata->partrelprunedata[0];\n>\n> - /* Perform pruning without using PARAM_EXEC Params */\n> - find_matching_subplans_recurse(prunedata, pprune, true, &result);\n> + if (parentrelids)\n> + *parentrelids = NULL;\n>\n> You got two blank lines after Assert.\n\nFixed.\n\n> --\n>\n> + /* Set up EState if not in the executor proper. */\n> + if (estate == NULL)\n> + {\n> + estate = CreateExecutorState();\n> + estate->es_param_list_info = params;\n> + free_estate = true;\n> }\n>\n> ... [Skip]\n>\n> + if (free_estate)\n> + {\n> + FreeExecutorState(estate);\n> + estate = NULL;\n> }\n>\n> I think this work should be left to the caller.\n\nDone. Also see below...\n\n> /*\n> * Stuff that follows matches exactly what ExecCreatePartitionPruneState()\n> * does, except we don't need a PartitionPruneState here, so don't call\n> * that function.\n> *\n> * XXX some refactoring might be good.\n> */\n>\n> +1, while doing it would be nice if foreach_current_index() is used\n> instead of the i & j sequence in the respective foreach() block, IMO.\n\nActually, I rewrote this part quite significantly so that most of the\ncode remains in its existing place. I decided to let\nGetLockableRelations_walker() create a PartitionPruneState and pass\nthat to ExecFindInitialMatchingSubPlans() that is now left more or\nless as is. Instead, ExecCreatePartitionPruneState() is changed to be\ncallable from outside the executor.\n\nThe temporary EState is no longer necessary. ExprContext,\nPartitionDirectory, etc. 
are now managed in the caller,\nGetLockableRelations_walker().\n\n> --\n>\n> + while ((i = bms_next_member(validsubplans, i)) >= 0)\n> + {\n> + Plan *subplan = list_nth(subplans, i);\n> +\n> + context->relations =\n> + bms_add_members(context->relations,\n> + get_plan_scanrelids(subplan));\n> + }\n>\n> I think instead of get_plan_scanrelids() the\n> GetLockableRelations_worker() can be used; if so, then no need to add\n> get_plan_scanrelids() function.\n\nYou're right, done.\n\n> --\n>\n> /* Nodes containing prunable subnodes. */\n> + case T_MergeAppend:\n> + {\n> + PlannedStmt *plannedstmt = context->plannedstmt;\n> + List *rtable = plannedstmt->rtable;\n> + ParamListInfo params = context->params;\n> + PartitionPruneInfo *pruneinfo;\n> + Bitmapset *validsubplans;\n> + Bitmapset *parentrelids;\n>\n> ...\n> if (pruneinfo && pruneinfo->contains_init_steps)\n> {\n> int i;\n> ...\n> return false;\n> }\n> }\n> break;\n>\n> Most of the declarations need to be moved inside the if-block.\n\nDone.\n\n> Also, initially, I was a bit concerned regarding this code block\n> inside GetLockableRelations_worker(), what if (pruneinfo &&\n> pruneinfo->contains_init_steps) evaluated to false? After debugging I\n> realized that plan_tree_walker() will do the needful -- a bit of\n> comment would have helped.\n\nYou're right. 
Added a dummy else {} block with just the comment saying so.\n\n> + case T_CustomScan:\n> + foreach(lc, ((CustomScan *) plan)->custom_plans)\n> + {\n> + if (walker((Plan *) lfirst(lc), context))\n> + return true;\n> + }\n> + break;\n>\n> Why not plan_walk_members() call like other nodes?\n\nMakes sense, done.\n\nAgain, most/all of this patch might need to be thrown away, but here\nit is anyway.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 14 Jan 2022 23:10:43 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Jan 14, 2022 at 11:10 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Jan 6, 2022 at 3:45 PM Amul Sul <sulamul@gmail.com> wrote:\n> > Here are few comments for v1 patch:\n>\n> Thanks Amul. I'm thinking about Robert's latest comments addressing\n> which may need some rethinking of this whole design, but I decided to\n> post a v2 taking care of your comments.\n\ncfbot tells me there is an unused variable warning, which is fixed in\nthe attached v3.\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 18 Jan 2022 10:32:57 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, 11 Jan 2022 at 16:22, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> This is just a relatively simple example and I think there are\n> probably a bunch of others. There are a lot of kinds of DDL that could\n> be performed on a partition that gets pruned away: DROP INDEX is just\n> one example.\n\nI haven't followed this in any detail, but this patch and its goal of\nreducing the O(N) drag effect on partition execution time is very\nimportant. 
Locking a long list of objects that then get pruned is very\nwasteful, as the results show.\n\nIdeally, we want an O(1) algorithm for single partition access and DDL\nis rare. So perhaps that is the starting point for a safe design -\ninvent a single lock or cache that allows us to check if the partition\nhierarchy has changed in any way, and if so, replan, if not, skip\nlocks.\n\nPlease excuse me if this idea falls short; if so, please just note my\ncomment about how important this is. Thanks.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 18 Jan 2022 07:44:48 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Hi Simon,\n\nOn Tue, Jan 18, 2022 at 4:44 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> On Tue, 11 Jan 2022 at 16:22, Robert Haas <robertmhaas@gmail.com> wrote:\n> > This is just a relatively simple example and I think there are\n> > probably a bunch of others. There are a lot of kinds of DDL that could\n> > be performed on a partition that gets pruned away: DROP INDEX is just\n> > one example.\n>\n> I haven't followed this in any detail, but this patch and its goal of\n> reducing the O(N) drag effect on partition execution time is very\n> important. 
So perhaps that is the starting point for a safe design -\n> invent a single lock or cache that allows us to check if the partition\n> hierarchy has changed in any way, and if so, replan, if not, skip\n> locks.\n\nRearchitecting partition locking to be O(1) seems like a project of\nnon-trivial complexity as Robert mentioned in a related email thread\ncouple of years ago:\n\nhttps://www.postgresql.org/message-id/CA%2BTgmoYbtm1uuDne3rRp_uNA2RFiBwXX1ngj3RSLxOfc3oS7cQ%40mail.gmail.com\n\nPursuing that kind of a project would perhaps have been more\nworthwhile if the locking issue had affected more than just this\nparticular case, that is, the case of running prepared statements over\npartitioned tables using generic plans. Addressing this by\nrearchitecting run-time pruning (and plancache to some degree) seemed\nlike it might lead to this getting fixed in a bounded timeframe. I\nadmit that the concerns that Robert has raised about the patch make me\nwant to reconsider that position, though maybe it's too soon to\nconclude.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 18 Jan 2022 17:10:22 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, 18 Jan 2022 at 08:10, Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Hi Simon,\n>\n> On Tue, Jan 18, 2022 at 4:44 PM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> > On Tue, 11 Jan 2022 at 16:22, Robert Haas <robertmhaas@gmail.com> wrote:\n> > > This is just a relatively simple example and I think there are\n> > > probably a bunch of others. There are a lot of kinds of DDL that could\n> > > be performed on a partition that gets pruned away: DROP INDEX is just\n> > > one example.\n> >\n> > I haven't followed this in any detail, but this patch and its goal of\n> > reducing the O(N) drag effect on partition execution time is very\n> > important. 
Locking a long list of objects that then get pruned is very\n> > wasteful, as the results show.\n> >\n> > Ideally, we want an O(1) algorithm for single partition access and DDL\n> > is rare. So perhaps that is the starting point for a safe design -\n> > invent a single lock or cache that allows us to check if the partition\n> > hierarchy has changed in any way, and if so, replan, if not, skip\n> > locks.\n>\n> Rearchitecting partition locking to be O(1) seems like a project of\n> non-trivial complexity as Robert mentioned in a related email thread\n> couple of years ago:\n>\n> https://www.postgresql.org/message-id/CA%2BTgmoYbtm1uuDne3rRp_uNA2RFiBwXX1ngj3RSLxOfc3oS7cQ%40mail.gmail.com\n\nI agree, completely redesigning locking is a major project. But that\nisn't what I suggested, which was to find an O(1) algorithm to solve\nthe safety issue. I'm sure there is an easy way to check one lock,\nmaybe a new one/new kind, rather than N.\n\nWhy does the safety issue exist? Why is it important to be able to\nconcurrently access parts of the hierarchy with DDL? Those are not\ncritical points.\n\nIf we asked them, most users would trade a 10x performance gain for\nsome restrictions on DDL. If anyone cares, make it an option, but most\npeople will use it.\n\nMaybe force all DDL, or just DDL that would cause safety issues, to\nupdate a hierarchy version number, so queries can tell whether they\nneed to replan. 
Don't know, just looking for an O(1) solution.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 18 Jan 2022 10:28:20 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Jan 18, 2022 at 3:10 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> Pursuing that kind of a project would perhaps have been more\n> worthwhile if the locking issue had affected more than just this\n> particular case, that is, the case of running prepared statements over\n> partitioned tables using generic plans. Addressing this by\n> rearchitecting run-time pruning (and plancache to some degree) seemed\n> like it might lead to this getting fixed in a bounded timeframe. I\n> admit that the concerns that Robert has raised about the patch make me\n> want to reconsider that position, though maybe it's too soon to\n> conclude.\n\nI wasn't trying to say that your approach was dead in the water. It\ndoes create a situation that can't happen today, and such things are\nscary and need careful thought. But redesigning the locking mechanism\nwould need careful thought, too ... 
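For illustration only, the version-number variant might look like this standalone C model (all names here are hypothetical -- nothing below is actual PostgreSQL code, and the real DDL and replan hooks are elided):

```c
/*
 * Standalone model of the version-number idea -- hypothetical names,
 * not PostgreSQL code.  Any DDL on the hierarchy bumps a counter; a
 * cached plan remembers the counter value it was built against and can
 * be reused after a single O(1) comparison instead of N partition locks.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct PartitionHierarchy
{
    uint64_t    version;        /* bumped by any DDL on the hierarchy */
} PartitionHierarchy;

typedef struct CachedPlan
{
    uint64_t    built_version;  /* hierarchy version at planning time */
    bool        valid;
} CachedPlan;

/* DDL path: record that the hierarchy changed in some way. */
static void
hierarchy_changed(PartitionHierarchy *h)
{
    h->version++;
}

/* (Re)build the plan against the current hierarchy version. */
static void
replan(CachedPlan *plan, const PartitionHierarchy *h)
{
    plan->built_version = h->version;
    plan->valid = true;
}

/* Executor path: O(1) check -- reuse the plan, or ask for a replan. */
static bool
plan_reusable(const CachedPlan *plan, const PartitionHierarchy *h)
{
    return plan->valid && plan->built_version == h->version;
}
```

The point of the sketch is only the shape of the check: one counter comparison on the query path, with all the cost pushed onto (rare) DDL.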
maybe even more of it than sorting\nthis out.\n\nI do also agree with Simon that this is an important problem to which\nwe need to find some solution.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 18 Jan 2022 09:53:05 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Jan 18, 2022 at 7:28 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> On Tue, 18 Jan 2022 at 08:10, Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Tue, Jan 18, 2022 at 4:44 PM Simon Riggs\n> > <simon.riggs@enterprisedb.com> wrote:\n> > > I haven't followed this in any detail, but this patch and its goal of\n> > > reducing the O(N) drag effect on partition execution time is very\n> > > important. Locking a long list of objects that then get pruned is very\n> > > wasteful, as the results show.\n> > >\n> > > Ideally, we want an O(1) algorithm for single partition access and DDL\n> > > is rare. So perhaps that is the starting point for a safe design -\n> > > invent a single lock or cache that allows us to check if the partition\n> > > hierarchy has changed in any way, and if so, replan, if not, skip\n> > > locks.\n> >\n> > Rearchitecting partition locking to be O(1) seems like a project of\n> > non-trivial complexity as Robert mentioned in a related email thread\n> > couple of years ago:\n> >\n> > https://www.postgresql.org/message-id/CA%2BTgmoYbtm1uuDne3rRp_uNA2RFiBwXX1ngj3RSLxOfc3oS7cQ%40mail.gmail.com\n>\n> I agree, completely redesigning locking is a major project. But that\n> isn't what I suggested, which was to find an O(1) algorithm to solve\n> the safety issue. I'm sure there is an easy way to check one lock,\n> maybe a new one/new kind, rather than N.\n\nI misread your email then, sorry.\n\n> Why does the safety issue exist? Why is it important to be able to\n> concurrently access parts of the hierarchy with DDL? 
Those are not\n> critical points.\n>\n> If we asked them, most users would trade a 10x performance gain for\n> some restrictions on DDL. If anyone cares, make it an option, but most\n> people will use it.\n>\n> Maybe force all DDL, or just DDL that would cause safety issues, to\n> update a hierarchy version number, so queries can tell whether they\n> need to replan. Don't know, just looking for an O(1) solution.\n\nYeah, it would be great if it would suffice to take a single lock on\nthe partitioned table mentioned in the query, rather than on all\nelements of the partition tree added to the plan. AFAICS, ways to get\nthat are 1) Prevent modifying non-root partition tree elements, 2)\nMake it so that locking a partitioned table becomes a proxy for having\nlocked all of its descendents, 3) Invent a Plan representation for\nscanning partitioned tables such that adding the descendent tables\nthat survive plan-time pruning to the plan doesn't require locking\nthem too. IIUC, you've mentioned 1 and 2. I think I've seen 3\nmentioned in the past discussions on this topic, but I guess the\nresearch on whether that's doable has never been done.\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Jan 2022 17:30:59 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Jan 18, 2022 at 11:53 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Jan 18, 2022 at 3:10 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Pursuing that kind of a project would perhaps have been more\n> > worthwhile if the locking issue had affected more than just this\n> > particular case, that is, the case of running prepared statements over\n> > partitioned tables using generic plans. Addressing this by\n> > rearchitecting run-time pruning (and plancache to some degree) seemed\n> > like it might lead to this getting fixed in a bounded timeframe. 
I\n> > admit that the concerns that Robert has raised about the patch make me\n> > want to reconsider that position, though maybe it's too soon to\n> > conclude.\n>\n> I wasn't trying to say that your approach was dead in the water. It\n> does create a situation that can't happen today, and such things are\n> scary and need careful thought. But redesigning the locking mechanism\n> would need careful thought, too ... maybe even more of it than sorting\n> this out.\n\nYes, agreed.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Jan 2022 17:31:45 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Wed, 19 Jan 2022 at 08:31, Amit Langote <amitlangote09@gmail.com> wrote:\n\n> > Maybe force all DDL, or just DDL that would cause safety issues, to\n> > update a hierarchy version number, so queries can tell whether they\n> > need to replan. Don't know, just looking for an O(1) solution.\n>\n> Yeah, it would be great if it would suffice to take a single lock on\n> the partitioned table mentioned in the query, rather than on all\n> elements of the partition tree added to the plan. AFAICS, ways to get\n> that are 1) Prevent modifying non-root partition tree elements,\n\nCan we reuse the concept of Strong/Weak locking here?\n\nWhen a DDL request is in progress (for that partitioned table), take\nall required locks for safety. When a DDL request is not in progress,\ntake minimal locks knowing it is safe.\n\nWe can take a single PartitionTreeModificationLock, nowait to prove\nthat we do not need all locks. DDL would request the lock in exclusive\nmode. 
(Other mechanisms possible).\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 19 Jan 2022 11:16:44 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Thu, Jan 13, 2022 at 3:20 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Jan 12, 2022 at 9:32 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Or, maybe this won't be a concern if performing ExecutorStart() is\n> > made a part of CheckCachedPlan() somehow, which would then take locks\n> > on the relation as the PlanState tree is built capturing any plan\n> > invalidations, instead of AcquireExecutorLocks(). That does sound like\n> > an ambitious undertaking though.\n>\n> On the surface that would seem to involve abstraction violations, but\n> maybe that could be finessed somehow. The plancache shouldn't know too\n> much about what the executor is going to do with the plan, but it\n> could ask the executor to perform a step that has been designed for\n> use by the plancache. I guess the core problem here is how to pass\n> around information that is node-specific before we've stood up the\n> executor state tree. Maybe the executor could have a function that\n> does the pruning and returns some kind of array of results that can be\n> used both to decide what to lock and also what to consider as pruned\n> at the start of execution. (I'm hand-waving about the details because\n> I don't know.)\n\nThe attached patch implements this idea. 
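As a rough standalone sketch of that fast path (hypothetical names throughout, with a plain flag standing in for real lock-manager calls):

```c
/*
 * Standalone sketch of the single-lock fast path -- hypothetical names,
 * not PostgreSQL code.  DDL takes the lock exclusively for the duration
 * of the change; a query does a conditional (nowait) acquire, and
 * success proves no concurrent DDL, so per-partition locks can be
 * skipped.  A bool stands in for the real lock manager here.
 */
#include <assert.h>
#include <stdbool.h>

typedef struct PartitionTreeModificationLock
{
    bool        held_by_ddl;    /* true while DDL holds it exclusively */
} PartitionTreeModificationLock;

/* DDL path: hold the lock exclusively while modifying the hierarchy. */
static void
ddl_acquire(PartitionTreeModificationLock *lock)
{
    lock->held_by_ddl = true;
}

static void
ddl_release(PartitionTreeModificationLock *lock)
{
    lock->held_by_ddl = false;
}

/*
 * Query path: conditional acquire.  On success, no DDL is in progress,
 * so minimal locking is known to be safe; on failure, fall back to
 * taking all the per-partition locks as today.
 */
static bool
query_try_fast_path(const PartitionTreeModificationLock *lock)
{
    return !lock->held_by_ddl;
}
```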
Sorry for the delay in\ngetting this out and thanks to Robert for the off-list discussions on\nthis.\n\nSo the new executor \"step\" you mention is the function ExecutorPrep in\nthe patch, which calls a recursive function ExecPrepNode on the plan\ntree's top node, much as ExecutorStart calls (via InitPlan)\nExecInitNode to construct a PlanState tree for actual execution\nparalleling the plan tree.\n\nFor now, ExecutorPrep() / ExecPrepNode() does mainly two things if and\nas it walks the plan tree: 1) Extract the RT indexes of RTE_RELATION\nentries and add them to a bitmapset in the result struct, 2) If the\nnode contains a PartitionPruneInfo, perform its \"initial pruning\nsteps\" and store the result of doing so in a per-plan-node node called\nPlanPrepOutput. The bitmapset and the array containing per-plan-node\nPlanPrepOutput nodes are returned in a node called ExecPrepOutput,\nwhich is the result of ExecutorPrep, to its calling module (say,\nplancache.c), which, after it's done using that information, must pass\nit forward to subsequent execution steps. That is done by passing it,\nvia the module's callers, to CreateQueryDesc() which remembers the\nExecPrepOutput in QueryDesc that is eventually passed to\nExecutorStart().\n\nA bunch of other details are mentioned in the patch's commit message,\nwhich I'm pasting below for anyone reading to spot any obvious flaws\n(no-go's) of this approach:\n\n Invent a new executor \"prep\" phase\n\n The new phase, implemented by execMain.c:ExecutorPrep() and its\n recursive underling execProcnode.c:ExecPrepNode(), takes a query's\n PlannedStmt and processes the plan tree contained in it to produce\n a ExecPrepOutput node as result.\n\n As the plan tree is walked, each node must add the RT index(es) of\n any relation(s) that it directly manipulates to a bitmapset member of\n ExecPrepOutput (for example, an IndexScan node must add the Scan's\n scanrelid). 
Also, each node may want to make a PlanPrepOutput node\n containing additional information that may be of interest to the\n calling module or to the later execution phases, if the node can\n provide one (for example, an Append node may perform initial pruning\n and add a set of \"initially valid subplans\" to the PlanPrepOutput).\n The PlanPrepOutput nodess of all the plan nodes are added to an array\n in the ExecPrepOutput, which is indexed using the individual nodes'\n plan_node_id; a NULL is stored in the array slots of nodes that\n don't have anything interesting to add to the PlanPrepOutput.\n\n The ExecPrepOutput thus produced is passed to CreateQueryDesc()\n and subsequently to ExecutorStart() via QueryDesc, which then makes\n it available to the executor routines via the query's EState.\n\n The main goal of adding this new phase is, for now, to allow cached\n cached generic plans containing scans of partitioned tables using\n Append/MergeAppend to be executed more efficiently by the prep phase\n doing any initial pruning, instead of deferring that to\n ExecutorStart(). That may allow AcquireExecutorLocks() on the plan\n to lock only only the minimal set of relations/partitions, that is\n those whose subplans survive the initial pruning.\n\n Implementation notes:\n\n * To allow initial pruning to be done as part of the pre-execution\n prep phase as opposed to as part of ExecutorStart(), this refactors\n ExecCreatePartitionPruneState() and ExecFindInitialMatchingSubPlans()\n to pass the information needed to do initial pruning directly as\n parameters instead of getting that from the EState and the PlanState\n of the parent Append/MergeAppend, both of which would not be\n available in ExecutorPrep(). 
Another, sort of non-essential-to-this-\n goal, refactoring this does is moving the partition pruning\n initialization stanzas in ExecInitAppend() and ExecInitMergeAppend()\n both of which contain the same cod into its own function\n ExecInitPartitionPruning().\n\n * To pass the ExecPrepOutput(s) created by the plancache module's\n invocation of ExecutorPrep() to the callers of the module, which in\n turn would pass them down to ExecutorStart(), CachedPlan gets a new\n List field that stores those ExecPrepOutputs, containing one element\n for each PlannedStmt also contained in the CachedPlan. The new list\n is stored in a child context of the context containing the\n PlannedStmts, though unlike the latter, it is reset on every\n invocation of CheckCachedPlan(), which in turn calls ExecutorPrep()\n with a new set of bound Params.\n\n * AcquireExecutorLocks() is now made to loop over a bitmapset of RT\n indexes, those of relations returned in ExecPrepOutput, instead of\n over the whole range table. With initial pruning that is also done\n as part of ExcecutorPrep(), only relations from non-pruned nodes of\n the plan tree would get locked as a result of this new arrangement.\n\n * PlannedStmt gets a new field usesPrepExecPruning that indicates\n whether any of the nodes of the plan tree contain \"initial\" (or\n \"pre-execution\") pruning steps, which saves ExecutorPrep() the\n trouble of walking the plan tree only to find out whether that's\n the case.\n\n * PartitionPruneInfo nodes now explicitly stores whether the steps\n contained in any of the individual PartitionedRelPruneInfos embedded\n in it contain initial pruning steps (those that can be performed\n during ExecutorPrep) and execution pruning steps (those that can only\n be performed during ExecutorRun), as flags contains_initial_steps and\n contains_exec_steps, respectively. 
In fact, the aforementioned\n PlannedStmt field's value is a logical OR of the values of the former\n across all PartitionPruneInfo nodes embedded in the plan tree.\n\n * PlannedStmt also gets a bitmapset field to store the RT indexes of\n all relation RTEs referenced in the query that is populated when\n contructing the flat range table in setrefs.c, which effectively\n contains all the relations that the planner must have locked. In the\n case of a cached plan, AcquireExecutorLocks() must lock all of those\n relations, except those whose subnodes get pruned as result of\n ExecutorPrep().\n\n * PlannedStmt gets yet another field numPlanNodes that records the\n highest plan_node_id assigned to any of the node contained in the\n tree, which serves as the size to use when allocating the\n PlanPrepOutput array.\n\nMaybe this should be more than one patch? Say:\n\n0001 to add ExecutorPrep and the boilerplate,\n0002 to teach plancache.c to use the new facility\n\nThoughts?\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 10 Feb 2022 17:13:52 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Thu, Feb 10, 2022 at 3:14 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> Maybe this should be more than one patch? Say:\n>\n> 0001 to add ExecutorPrep and the boilerplate,\n> 0002 to teach plancache.c to use the new facility\n\nCould be, not sure. I agree that if it's possible to split this in a\nmeaningful way, it would facilitate review. I notice that there is\nsome straight code movement e.g. the creation of\nExecPartitionPruneFixSubPlanIndexes. It would be best, I think, to do\npure code movement in a preparatory patch so that the main patch is\njust adding the new stuff we need and not moving stuff around.\n\nDavid Rowley recently proposed a patch for some parallel-safety\ndebugging cross checks which added a plan tree walker. 
I'm not sure\nwhether he's going to press that patch forward to commit, but I think\nwe should get something like that into the tree and start using it,\nrather than adding more bespoke code. Maybe you/we should steal that\npart of his patch and commit it separately. What I'm imagining is that\nplan_tree_walker() would know which nodes have subnodes and how to\nrecurse over the tree structure, and you'd have a walker function to\nuse with it that would know which executor nodes have ExecPrep\nfunctions and call them, and just do nothing for the others. That\nwould spare you adding stub functions for nodes that don't need to do\nanything, or don't need to do anything other than recurse. Admittedly\nit would look a bit different from the existing executor phases, but\nI'd argue that it's a better coding model.\n\nActually, you might've had this in the patch at some point, because\nyou have a declaration for plan_tree_walker but no implementation. I\nguess one thing that's a bit awkward about this idea is that in some\ncases you want to recurse to some subnodes but not other subnodes. But\nmaybe it would work to put the recursion in the walker function in\nthat case, and then just return true; but if you want to walk all\nchildren, return false.\n\n+ bool contains_init_steps;\n+ bool contains_exec_steps;\n\ns/steps/pruning/? maybe with contains -> needs or performs or requires as well?\n\n+ * Returned information includes the set of RT indexes of relations referenced\n+ * in the plan, and a PlanPrepOutput node for each node in the planTree if the\n+ * node type supports producing one.\n\nAren't all RT indexes referenced in the plan?\n\n+ * This may lock relations whose information may be used to produce the\n+ * PlanPrepOutput nodes. 
For example, a partitioned table before perusing its\n+ * PartitionPruneInfo contained in an Append node to do the pruning the result\n+ * of which is used to populate the Append node's PlanPrepOutput.\n\n\"may lock\" feels awfully fuzzy to me. How am I supposed to rely on\nsomething that \"may\" happen? And don't we need to have tight logic\naround locking, with specific guarantees about what is locked at which\npoints in the code and what is not?\n\n+ * At least one of 'planstate' or 'econtext' must be passed to be able to\n+ * successfully evaluate any non-Const expressions contained in the\n+ * steps.\n\nThis also seems fuzzy. If I'm thinking of calling this function, I\ndon't know how I'd know whether this criterion is met.\n\nI don't love PlanPrepOutput the way you have it. I think one of the\nbasic design issues for this patch is: should we think of the prep\nphase as specifically pruning, or is it general prep and pruning is\nthe first thing for which we're going to use it? If it's really a\npre-pruning phase, we could name it that way instead of calling it\n\"prep\". If it's really a general prep phase, then why does\nPlanPrepOutput contain initially_valid_subnodes as a field? One could\nimagine letting each prep function decide what kind of prep node it\nwould like to return, with partition pruning being just one of the\noptions. But is that a useful generalization of the basic concept, or\njust pretending that a special-purpose mechanism is more general than\nit really is?\n\n+ return CreateQueryDesc(pstmt, NULL, /* XXX pass ExecPrepOutput too? */\n\nIt seems to me that we should do what the XXX suggests. 
It doesn't\nseem nice if the parallel workers could theoretically decide to prune\na different set of nodes than the leader.\n\n+ * known at executor startup (excludeing expressions containing\n\nExtra e.\n\n+ * into subplan indexes, is also returned for use during subsquent\n\nMissing e.\n\nSomewhere, we're going to need to document the idea that this may\npermit us to execute a plan that isn't actually fully valid, but that\nwe expect to survive because we'll never do anything with the parts of\nit that aren't. Maybe that should be added to the executor README, or\nmaybe there's some better place, but I don't think that should remain\nsomething that's just implicit.\n\nThis is not a full review, just some initial thoughts looking through this.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Feb 2022 17:01:52 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Hi,\n\nOn 2022-02-10 17:13:52 +0900, Amit Langote wrote:\n> The attached patch implements this idea. Sorry for the delay in\n> getting this out and thanks to Robert for the off-list discussions on\n> this.\n\nI did not follow this thread at all. And I only skimmed the patch. So I'm\nprobably wrong.\n\nI'm a wary of this increasing executor overhead even in cases it won't\nhelp. Without this patch, for simple queries, I see small allocations\nnoticeably in profiles. 
This adds a bunch more, even if\n!context->stmt->usesPreExecPruning:\n\n- makeNode(ExecPrepContext)\n- makeNode(ExecPrepOutput)\n- palloc0(sizeof(PlanPrepOutput *) * result->numPlanNodes)\n- stmt_execprep_list = lappend(stmt_execprep_list, execprep);\n- AllocSetContextCreate(CurrentMemoryContext,\n \"CachedPlan execprep list\", ...\n- ...\n\nThat's a lot of extra for something that's already a bottleneck.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 10 Feb 2022 17:29:35 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "(just catching up on this thread)\n\nOn Thu, 13 Jan 2022 at 07:20, Robert Haas <robertmhaas@gmail.com> wrote:\n> Yeah. I don't think it's only non-core code we need to worry about\n> either. What if I just do EXPLAIN ANALYZE on a prepared query that\n> ends up pruning away some stuff? IIRC, the pruned subplans are not\n> shown, so we might escape disaster here, but FWIW if I'd committed\n> that code I would have pushed hard for showing those and saying \"(not\n> executed)\" .... so it's not too crazy to imagine a world in which\n> things work that way.\n\nFWIW, that would remove the whole point in init run-time pruning. The\nreason I made two phases of run-time pruning was so that we could get\naway from having the init plan overhead of nodes we'll never need to\nscan. If we wanted to show the (never executed) scans in EXPLAIN then\nwe'd need to do the init plan part and allocate all that memory\nneedlessly.\n\nImagine a hash partitioned table on \"id\" with 1000 partitions. The user does:\n\nPREPARE q1 (INT) AS SELECT * FROM parttab WHERE id = $1;\n\nEXECUTE q1(123);\n\nAssuming a generic plan, if we didn't have init pruning then we have\nto build a plan containing the scans for all 1000 partitions. 
There's\nsignificant overhead to that compared to just locking the partitions,\nand initialising 1 scan.\n\nIf it worked this way then we'd be even further from Amit's goal of\nreducing the overhead of starting plan with run-time pruning nodes.\n\nI understood at the time it was just the EXPLAIN output that you had\nconcerns with. I thought that was just around the lack of any display\nof the condition we used for pruning.\n\nDavid\n\n\n", "msg_date": "Mon, 14 Feb 2022 10:55:16 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Sun, Feb 13, 2022 at 4:55 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> FWIW, that would remove the whole point in init run-time pruning. The\n> reason I made two phases of run-time pruning was so that we could get\n> away from having the init plan overhead of nodes we'll never need to\n> scan. If we wanted to show the (never executed) scans in EXPLAIN then\n> we'd need to do the init plan part and allocate all that memory\n> needlessly.\n\nInteresting. I didn't realize that was why it had ended up like this.\n\n> I understood at the time it was just the EXPLAIN output that you had\n> concerns with. I thought that was just around the lack of any display\n> of the condition we used for pruning.\n\nThat was part of it, but I did think it was surprising that we didn't\nprint anything at all about the nodes we pruned, too. Although we're\ntechnically iterating over the PlanState, from the user perspective it\nfeels like you're asking PostgreSQL to print out the plan - so it\nseems weird to have nodes in the Plan tree that are quietly omitted\nfrom the output. 
That said, perhaps in retrospect it's good that it\nended up as it did, since we'd have a lot of trouble printing anything\nsensible for a scan of a table that's since been dropped.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Feb 2022 15:17:48 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Hi Andres,\n\nOn Fri, Feb 11, 2022 at 10:29 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-02-10 17:13:52 +0900, Amit Langote wrote:\n> > The attached patch implements this idea. Sorry for the delay in\n> > getting this out and thanks to Robert for the off-list discussions on\n> > this.\n>\n> I did not follow this thread at all. And I only skimmed the patch. So I'm\n> probably wrong.\n\nThanks for your interest in this and sorry about the delay in replying\n(have been away due to illness).\n\n> I'm a wary of this increasing executor overhead even in cases it won't\n> help. Without this patch, for simple queries, I see small allocations\n> noticeably in profiles. This adds a bunch more, even if\n> !context->stmt->usesPreExecPruning:\n\nAh, if any new stuff added by the patch runs in\n!context->stmt->usesPreExecPruning paths, then it's just poor coding\non my part, which I'm now looking to fix. 
Maybe not all of it is\navoidable, but I think whatever isn't should be trivial...\n\n> - makeNode(ExecPrepContext)\n> - makeNode(ExecPrepOutput)\n> - palloc0(sizeof(PlanPrepOutput *) * result->numPlanNodes)\n> - stmt_execprep_list = lappend(stmt_execprep_list, execprep);\n> - AllocSetContextCreate(CurrentMemoryContext,\n> \"CachedPlan execprep list\", ...\n> - ...\n>\n> That's a lot of extra for something that's already a bottleneck.\n\nIf all these allocations are limited to the usesPreExecPruning path,\nIMO, they would amount to trivial overhead compared to what is going\nto be avoided -- locking say 1000 partitions when only 1 will be\nscanned. Although, maybe there's a way to code this to have even less\noverhead than what's in the patch now.\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 28 Feb 2022 15:04:14 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Feb 11, 2022 at 7:02 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Feb 10, 2022 at 3:14 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Maybe this should be more than one patch? Say:\n> >\n> > 0001 to add ExecutorPrep and the boilerplate,\n> > 0002 to teach plancache.c to use the new facility\n\nThanks for taking a look and sorry about the delay.\n\n> Could be, not sure. I agree that if it's possible to split this in a\n> meaningful way, it would facilitate review. I notice that there is\n> some straight code movement e.g. the creation of\n> ExecPartitionPruneFixSubPlanIndexes. It would be best, I think, to do\n> pure code movement in a preparatory patch so that the main patch is\n> just adding the new stuff we need and not moving stuff around.\n\nOkay, created 0001 for moving around the execution pruning code.\n\n> David Rowley recently proposed a patch for some parallel-safety\n> debugging cross checks which added a plan tree walker. 
I'm not sure\n> whether he's going to press that patch forward to commit, but I think\n> we should get something like that into the tree and start using it,\n> rather than adding more bespoke code. Maybe you/we should steal that\n> part of his patch and commit it separately.\n\nI looked at the thread you mentioned (I guess [1]), though it seems\nDavid's proposing a path_tree_walker(), so I guess only useful within\nthe planner and not here.\n\n> What I'm imagining is that\n> plan_tree_walker() would know which nodes have subnodes and how to\n> recurse over the tree structure, and you'd have a walker function to\n> use with it that would know which executor nodes have ExecPrep\n> functions and call them, and just do nothing for the others. That\n> would spare you adding stub functions for nodes that don't need to do\n> anything, or don't need to do anything other than recurse. Admittedly\n> it would look a bit different from the existing executor phases, but\n> I'd argue that it's a better coding model.\n>\n> Actually, you might've had this in the patch at some point, because\n> you have a declaration for plan_tree_walker but no implementation.\n\nRight, the previous patch indeed used a plan_tree_walker() for this\nand I think in a way you seem to think it should work.\n\nI do agree that plan_tree_walker() allows for a better implementation\nof the idea of this patch and may also be generally useful, so I've\ncreated a separate patch that adds it to nodeFuncs.c.\n\n> I guess one thing that's a bit awkward about this idea is that in some\n> cases you want to recurse to some subnodes but not other subnodes. But\n> maybe it would work to put the recursion in the walker function in\n> that case, and then just return true; but if you want to walk all\n> children, return false.\n\nRight, that's how I've made ExecPrepAppend() etc. do it.\n\n> + bool contains_init_steps;\n> + bool contains_exec_steps;\n>\n> s/steps/pruning/? 
maybe with contains -> needs or performs or requires as well?\n\nWent with: needs_{init|exec}_pruning\n\n> + * Returned information includes the set of RT indexes of relations referenced\n> + * in the plan, and a PlanPrepOutput node for each node in the planTree if the\n> + * node type supports producing one.\n>\n> Aren't all RT indexes referenced in the plan?\n\nAh yes. How about:\n\n * Returned information includes the set of RT indexes of relations that must\n * be locked to safely execute the plan,\n\n> + * This may lock relations whose information may be used to produce the\n> + * PlanPrepOutput nodes. For example, a partitioned table before perusing its\n> + * PartitionPruneInfo contained in an Append node to do the pruning the result\n> + * of which is used to populate the Append node's PlanPrepOutput.\n>\n> \"may lock\" feels awfully fuzzy to me. How am I supposed to rely on\n> something that \"may\" happen? And don't we need to have tight logic\n> around locking, with specific guarantees about what is locked at which\n> points in the code and what is not?\n\nAgreed, the wording was fuzzy. I've rewritten it as:\n\n * This locks relations whose information is needed to produce the\n * PlanPrepOutput nodes. 
For example, a partitioned table before perusing its\n * PartitionedRelPruneInfo contained in an Append node to do the pruning, the\n * result of which is used to populate the Append node's PlanPrepOutput.\n\nBTW, I've added an Assert in ExecGetRangeTableRelation():\n\n /*\n * A cross-check that AcquireExecutorLocks() hasn't missed any relations\n * it must not have.\n */\n Assert(estate->es_execprep == NULL ||\n bms_is_member(rti, estate->es_execprep->relationRTIs));\n\nwhich IOW ensures that the actual execution of a plan only sees\nrelations that ExecutorPrep() would've told AcquireExecutorLocks() to\ntake a lock on.\n\n> + * At least one of 'planstate' or 'econtext' must be passed to be able to\n> + * successfully evaluate any non-Const expressions contained in the\n> + * steps.\n>\n> This also seems fuzzy. If I'm thinking of calling this function, I\n> don't know how I'd know whether this criterion is met.\n\nOK, I have removed this comment (which was on top of a static local\nfunction) in favor of adding some commentary on this in places where\nit belongs. For example, in ExecPrepDoInitialPruning():\n\n /*\n * We don't yet have a PlanState for the parent plan node, so must create\n * a standalone ExprContext to evaluate pruning expressions, equipped with\n * the information about the EXTERN parameters that the caller passed us.\n * Note that that's okay because the initial pruning steps does not\n * involve anything that requires the execution to have started.\n */\n econtext = CreateStandaloneExprContext();\n econtext->ecxt_param_list_info = params;\n prunestate = ExecCreatePartitionPruneState(NULL, pruneinfo,\n true, false,\n rtable, econtext,\n pdir, parentrelids);\n\n> I don't love PlanPrepOutput the way you have it. I think one of the\n> basic design issues for this patch is: should we think of the prep\n> phase as specifically pruning, or is it general prep and pruning is\n> the first thing for which we're going to use it? 
If it's really a\n> pre-pruning phase, we could name it that way instead of calling it\n> \"prep\". If it's really a general prep phase, then why does\n> PlanPrepOutput contain initially_valid_subnodes as a field? One could\n> imagine letting each prep function decide what kind of prep node it\n> would like to return, with partition pruning being just one of the\n> options. But is that a useful generalization of the basic concept, or\n> just pretending that a special-purpose mechanism is more general than\n> it really is?\n\nWhile it can feel like the latter TBH, I'm inclined to keep\nExecutorPrep generalized. What bothers me about the\nalternative of calling the new phase something less generalized like\nExecutorDoInitPruning() is that that makes the somewhat elaborate API\nchanges needed for the phase's output to put into QueryDesc, through\nwhich it ultimately reaches the main executor, seem less worthwhile.\n\nI agree that PlanPrepOutput design needs to be likewise generalized,\nmaybe like you suggest -- using PlanInitPruningOutput, a child class\nof PlanPrepOutput, to return the prep output for plan nodes that\nsupport pruning.\n\nThoughts?\n\n> + return CreateQueryDesc(pstmt, NULL, /* XXX pass ExecPrepOutput too? */\n>\n> It seems to me that we should do what the XXX suggests. It doesn't\n> seem nice if the parallel workers could theoretically decide to prune\n> a different set of nodes than the leader.\n\nOK, will fix.\n\n> + * known at executor startup (excludeing expressions containing\n>\n> Extra e.\n>\n> + * into subplan indexes, is also returned for use during subsquent\n>\n> Missing e.\n\nWill fix.\n\n> Somewhere, we're going to need to document the idea that this may\n> permit us to execute a plan that isn't actually fully valid, but that\n> we expect to survive because we'll never do anything with the parts of\n> it that aren't. 
Maybe that should be added to the executor README, or\n> maybe there's some better place, but I don't think that should remain\n> something that's just implicit.\n\nAgreed. I'd added a description of the new prep phase to executor\nREADME, though the text didn't mention this particular bit. Will fix\nto mention it.\n\n> This is not a full review, just some initial thoughts looking through this.\n\nThanks again. Will post a new version soon after a bit more polishing.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/flat/b59605fecb20ba9ea94e70ab60098c237c870628.camel%40postgrespro.ru\n\n\n", "msg_date": "Mon, 7 Mar 2022 23:18:33 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Mon, Mar 7, 2022 at 11:18 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Fri, Feb 11, 2022 at 7:02 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I don't love PlanPrepOutput the way you have it. I think one of the\n> > basic design issues for this patch is: should we think of the prep\n> > phase as specifically pruning, or is it general prep and pruning is\n> > the first thing for which we're going to use it? If it's really a\n> > pre-pruning phase, we could name it that way instead of calling it\n> > \"prep\". If it's really a general prep phase, then why does\n> > PlanPrepOutput contain initially_valid_subnodes as a field? One could\n> > imagine letting each prep function decide what kind of prep node it\n> > would like to return, with partition pruning being just one of the\n> > options. But is that a useful generalization of the basic concept, or\n> > just pretending that a special-purpose mechanism is more general than\n> > it really is?\n>\n> While it can feel like the latter TBH, I'm inclined to keep\n> ExecutorPrep generalized. 
What bothers me about the\n> alternative of calling the new phase something less generalized like\n> ExecutorDoInitPruning() is that that makes the somewhat elaborate API\n> changes needed for the phase's output to put into QueryDesc, through\n> which it ultimately reaches the main executor, seem less worthwhile.\n>\n> I agree that PlanPrepOutput design needs to be likewise generalized,\n> maybe like you suggest -- using PlanInitPruningOutput, a child class\n> of PlanPrepOutput, to return the prep output for plan nodes that\n> support pruning.\n>\n> Thoughts?\n\nSo I decided to agree with you after all about limiting the scope of\nthis new executor interface, or IOW call it what it is.\n\nI have named it ExecutorGetLockRels() to go with the only use case we\nknow for it -- get the set of relations for AcquireExecutorLocks() to\nlock to validate a plan tree. Its result is returned in a node named\nExecLockRelsInfo, which contains the set of relations scanned in the\nplan tree (lockrels) and a list of PlanInitPruningOutput nodes for all\nnodes that undergo pruning.\n\n> > + return CreateQueryDesc(pstmt, NULL, /* XXX pass ExecPrepOutput too? */\n> >\n> > It seems to me that we should do what the XXX suggests. It doesn't\n> > seem nice if the parallel workers could theoretically decide to prune\n> > a different set of nodes than the leader.\n>\n> OK, will fix.\n\nDone. This required adding nodeToString() and stringToNode() support\nfor the nodes produced by the new executor function that wasn't there\nbefore.\n\n> > Somewhere, we're going to need to document the idea that this may\n> > permit us to execute a plan that isn't actually fully valid, but that\n> > we expect to survive because we'll never do anything with the parts of\n> > it that aren't. 
I'd added a description of the new prep phase to executor\n> README, though the text didn't mention this particular bit. Will fix\n> to mention it.\n\nRewrote the comments above ExecutorGetLockRels() (previously\nExecutorPrep()) and the executor README text to be explicit about the\nfact that not locking some relations effectively invalidates pruned\nparts of the plan tree.\n\n> > This is not a full review, just some initial thoughts looking through this.\n>\n> Thanks again. Will post a new version soon after a bit more polishing.\n\nAttached is v5, now broken into 3 patches:\n\n0001: Some refactoring of runtime pruning code\n0002: Add a plan_tree_walker\n0003: Teach AcquireExecutorLocks to skip locking pruned relations\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 11 Mar 2022 23:35:37 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Mar 11, 2022 at 11:35 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Attached is v5, now broken into 3 patches:\n>\n> 0001: Some refactoring of runtime pruning code\n> 0002: Add a plan_tree_walker\n> 0003: Teach AcquireExecutorLocks to skip locking pruned relations\n\nRepeated the performance tests described in the 1st email of this thread:\n\nHEAD: (copied from the 1st email)\n\n32 tps = 20561.776403 (without initial connection time)\n64 tps = 12553.131423 (without initial connection time)\n128 tps = 13330.365696 (without initial connection time)\n256 tps = 8605.723120 (without initial connection time)\n512 tps = 4435.951139 (without initial connection time)\n1024 tps = 2346.902973 (without initial connection time)\n2048 tps = 1334.680971 (without initial connection time)\n\nPatched v1: (copied from the 1st email)\n\n32 tps = 27554.156077 (without initial connection time)\n64 tps = 27531.161310 (without initial connection time)\n128 tps = 27138.305677 (without initial connection 
time)\n256 tps = 25825.467724 (without initial connection time)\n512 tps = 19864.386305 (without initial connection time)\n1024 tps = 18742.668944 (without initial connection time)\n2048 tps = 16312.412704 (without initial connection time)\n\nPatched v5:\n\n32 tps = 28204.197738 (without initial connection time)\n64 tps = 26795.385318 (without initial connection time)\n128 tps = 26387.920550 (without initial connection time)\n256 tps = 25601.141556 (without initial connection time)\n512 tps = 19911.947502 (without initial connection time)\n1024 tps = 20158.692952 (without initial connection time)\n2048 tps = 16180.195463 (without initial connection time)\n\nGood to see that these rewrites haven't really hurt the numbers much,\nwhich makes sense because the rewrites have really been about putting\nthe code in the right place.\n\nBTW, these are the numbers for the same benchmark repeated with\nplan_cache_mode = auto, which causes a custom plan to be chosen for\nevery execution and so unaffected by this patch.\n\n32 tps = 13359.225082 (without initial connection time)\n64 tps = 15760.533280 (without initial connection time)\n128 tps = 15825.734482 (without initial connection time)\n256 tps = 15017.693905 (without initial connection time)\n512 tps = 13479.973395 (without initial connection time)\n1024 tps = 13200.444397 (without initial connection time)\n2048 tps = 12884.645475 (without initial connection time)\n\nComparing them to numbers when using force_generic_plan shows that\nmaking the generic plans faster is indeed worthwhile.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 12 Mar 2022 00:06:34 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Mar 11, 2022 at 9:35 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> Attached is v5, now broken into 3 patches:\n>\n> 0001: Some refactoring of runtime pruning 
code\n> 0002: Add a plan_tree_walker\n> 0003: Teach AcquireExecutorLocks to skip locking pruned relations\n\nSo is any other committer planning to look at this? Tom, perhaps?\nDavid? This strikes me as important work, and I don't mind going\nthrough and trying to do some detailed review, but (A) I am not the\nperson most familiar with the code being modified here and (B) there\nare some important theoretical questions about the approach that we\nmight want to try to cover before we get down into the details.\n\nIn my opinion, the most important theoretical issue here is around\nreuse of plans that are no longer entirely valid, but the parts that\nare no longer valid are certain to be pruned. If, because we know that\nsome parameter has some particular value, we skip locking a bunch of\npartitions, then when we're executing the plan, those partitions need\nnot exist any more -- or they could have different indexes, be\ndetached from the partitioning hierarchy and subsequently altered,\nwhatever. That seems fine to me provided that all of our code (and any\nthird-party code) is careful not to rely on the portion of the plan\nthat we've pruned away, and doesn't assume that (for example) we can\nstill fetch the name of an index whose OID appears in there someplace.\nI cannot think of a hazard where the fact that the part of a plan is\nno longer valid because some DDL has been executed \"infects\" the\nremainder of the plan. As long as we lock the partitioned tables named\nin the plan and their descendents down to the level just above the one\nat which something is pruned, and are careful, I think we should be\nOK. It would be nice to know if someone has a fundamentally different\nview of the hazards here, though.\n\nJust to state my position here clearly, I would be more than happy if\nsomebody else plans to pick this up and try to get some or all of it\ncommitted, and will cheerfully defer to such person in the event that\nthey have that plan. 
If, however, no such person exists, I may try my\nhand at that myself.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Mar 2022 14:42:19 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> In my opinion, the most important theoretical issue here is around\n> reuse of plans that are no longer entirely valid, but the parts that\n> are no longer valid are certain to be pruned. If, because we know that\n> some parameter has some particular value, we skip locking a bunch of\n> partitions, then when we're executing the plan, those partitions need\n> not exist any more -- or they could have different indexes, be\n> detached from the partitioning hierarchy and subsequently altered,\n> whatever.\n\nCheck.\n\n> That seems fine to me provided that all of our code (and any\n> third-party code) is careful not to rely on the portion of the plan\n> that we've pruned away, and doesn't assume that (for example) we can\n> still fetch the name of an index whose OID appears in there someplace.\n\n... like EXPLAIN, for example?\n\nIf \"pruning\" means physical removal from the plan tree, then it's\nprobably all right. However, it looks to me like that doesn't\nactually happen, or at least doesn't happen till much later, so\nthere's room for worry about a disconnect between what plancache.c\nhas verified and what executor startup will try to touch. As you\nsay, in the absence of any bugs, that's not a problem ... but if\nthere are such bugs, tracking them down would be really hard.\n\nWhat I am skeptical about is that this work actually accomplishes\nanything under real-world conditions. 
That's because if pruning would\nsave enough to make skipping the lock-acquisition phase worth the\ntrouble, the plan cache is almost certainly going to decide it should\nbe using a custom plan not a generic plan. Now if we had a better\ncost model (or, indeed, any model at all) for run-time pruning effects\nthen maybe that situation could be improved. I think we'd be better\nserved to worry about that end of it before we spend more time making\nthe executor even less predictable.\n\nAlso, while I've not spent much time at all reading this patch,\nit seems rather desperately undercommented, and a lot of the\nnew names are unintelligible. In particular, I suspect that the\npatch is significantly redesigning when/where run-time pruning\nhappens (unless it's just letting that be run twice); but I don't\nsee any documentation or name changes suggesting where that\nresponsibility is now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 14 Mar 2022 15:38:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Mon, Mar 14, 2022 at 3:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> ... like EXPLAIN, for example?\n\nExactly! I think that's the foremost example, but extension modules\nlike auto_explain or even third-party extensions are also a risk. I\nthink there was some discussion of this previously.\n\n> If \"pruning\" means physical removal from the plan tree, then it's\n> probably all right. However, it looks to me like that doesn't\n> actually happen, or at least doesn't happen till much later, so\n> there's room for worry about a disconnect between what plancache.c\n> has verified and what executor startup will try to touch. As you\n> say, in the absence of any bugs, that's not a problem ... but if\n> there are such bugs, tracking them down would be really hard.\n\nSurgery on the plan would violate the general principle that plans are\nread only once constructed. 
I think the idea ought to be to pass a\nsecondary data structure around with the plan that defines which parts\nyou must ignore. Any code that fails to use that other data structure\nin the appropriate manner gets defined to be buggy and has to be fixed\nby making it follow the new rules.\n\n> What I am skeptical about is that this work actually accomplishes\n> anything under real-world conditions. That's because if pruning would\n> save enough to make skipping the lock-acquisition phase worth the\n> trouble, the plan cache is almost certainly going to decide it should\n> be using a custom plan not a generic plan. Now if we had a better\n> cost model (or, indeed, any model at all) for run-time pruning effects\n> then maybe that situation could be improved. I think we'd be better\n> served to worry about that end of it before we spend more time making\n> the executor even less predictable.\n\nI don't agree with that analysis, because setting plan_cache_mode is\nnot uncommon. Even if that GUC didn't exist, I'm pretty sure there are\ncases where the planner naturally falls into a generic plan anyway,\neven though pruning is happening. But as it is, the GUC does exist,\nand people use it. Consequently, while I'd love to see something done\nabout the costing side of things, I do not accept that all other\nimprovements should wait for that to happen.\n\n> Also, while I've not spent much time at all reading this patch,\n> it seems rather desperately undercommented, and a lot of the\n> new names are unintelligible. In particular, I suspect that the\n> patch is significantly redesigning when/where run-time pruning\n> happens (unless it's just letting that be run twice); but I don't\n> see any documentation or name changes suggesting where that\n> responsibility is now.\n\nI am sympathetic to that concern. I spent a while staring at a\nbaffling comment in 0001 only to discover it had just been moved from\nelsewhere. 
I really don't feel that things in this are as clear as\nthey could be -- although I hasten to add that I respect the people\nwho have done work in this area previously and am grateful for what\nthey did. It's been a huge benefit to the project in spite of the\nbumps in the road. Moreover, this isn't the only code in PostgreSQL\nthat needs improvement, or the worst. That said, I do think there are\nproblems. I don't yet have a position on whether this patch is making\nthat better or worse.\n\nThat said, I believe that the core idea of the patch is to optionally\nperform pruning before we acquire locks or spin up the main executor\nand then remember the decisions we made. If once the main executor is\nspun up we already made those decisions, then we must stick with what\nwe decided. If not, we make those pruning decisions at the same point\nwe do currently - more or less on demand, at the point when we'd need\nto know whether to descend that branch of the plan tree or not. I\nthink this scheme comes about because there are a couple of different\ninterfaces to the parameterized query stuff, and in some code paths we\nhave the values early enough to use them for pre-pruning, and in\nothers we don't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Mar 2022 16:06:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Mar 15, 2022 at 5:06 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Mar 14, 2022 at 3:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > What I am skeptical about is that this work actually accomplishes\n> > anything under real-world conditions. That's because if pruning would\n> > save enough to make skipping the lock-acquisition phase worth the\n> > trouble, the plan cache is almost certainly going to decide it should\n> > be using a custom plan not a generic plan. 
Now if we had a better\n> > cost model (or, indeed, any model at all) for run-time pruning effects\n> > then maybe that situation could be improved. I think we'd be better\n> > served to worry about that end of it before we spend more time making\n> > the executor even less predictable.\n>\n> I don't agree with that analysis, because setting plan_cache_mode is\n> not uncommon. Even if that GUC didn't exist, I'm pretty sure there are\n> cases where the planner naturally falls into a generic plan anyway,\n> even though pruning is happening. But as it is, the GUC does exist,\n> and people use it. Consequently, while I'd love to see something done\n> about the costing side of things, I do not accept that all other\n> improvements should wait for that to happen.\n\nI agree that making generic plans execute faster has merit even before\nwe make the costing changes to allow plancache.c prefer generic plans\nover custom ones in these cases. As the numbers in my previous email\nshow, simply executing a generic plan with the proposed improvements\napplied is significantly cheaper than having the planner do the\npruning on every execution:\n\nnparts auto/custom generic\n====== ========== ======\n32 13359 28204\n64 15760 26795\n128 15825 26387\n256 15017 25601\n512 13479 19911\n1024 13200 20158\n2048 12884 16180\n\n> > Also, while I've not spent much time at all reading this patch,\n> > it seems rather desperately undercommented, and a lot of the\n> > new names are unintelligible. In particular, I suspect that the\n> > patch is significantly redesigning when/where run-time pruning\n> > happens (unless it's just letting that be run twice); but I don't\n> > see any documentation or name changes suggesting where that\n> > responsibility is now.\n>\n> I am sympathetic to that concern. I spent a while staring at a\n> baffling comment in 0001 only to discover it had just been moved from\n> elsewhere. 
I really don't feel that things in this are as clear as\n> they could be -- although I hasten to add that I respect the people\n> who have done work in this area previously and am grateful for what\n> they did. It's been a huge benefit to the project in spite of the\n> bumps in the road. Moreover, this isn't the only code in PostgreSQL\n> that needs improvement, or the worst. That said, I do think there are\n> problems. I don't yet have a position on whether this patch is making\n> that better or worse.\n\nOkay, I'd like to post a new version with the comments edited to make\nthem a bit more intelligible. I understand that the comments around\nthe new invocation mode(s) of runtime pruning are not as clear as they\nshould be, especially as the changes that this patch wants to make to\nhow things work are not very localized.\n\n> That said, I believe that the core idea of the patch is to optionally\n> perform pruning before we acquire locks or spin up the main executor\n> and then remember the decisions we made. If once the main executor is\n> spun up we already made those decisions, then we must stick with what\n> we decided. If not, we make those pruning decisions at the same point\n> we do currently\n\nRight. The \"initial\" pruning, that this patch wants to make occur at\nan earlier point (plancache.c), is currently performed in\nExecInit[Merge]Append().\n\nIf it does occur early due to the plan being a cached one,\nExecInit[Merge]Append() simply refers to its result that would be made\navailable via a new data structure that plancache.c has been made to\npass down to the executor alongside the plan tree.\n\nIf it does not, ExecInit[Merge]Append() does the pruning in the same\nway it does now. 
Such cases include initial pruning using only STABLE\nexpressions that the planner doesn't bother to compute by itself lest\nthe resulting plan may be cached, but no EXTERN parameters.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Mar 2022 15:19:00 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Mar 15, 2022 at 3:19 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Mar 15, 2022 at 5:06 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Mon, Mar 14, 2022 at 3:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Also, while I've not spent much time at all reading this patch,\n> > > it seems rather desperately undercommented, and a lot of the\n> > > new names are unintelligible. In particular, I suspect that the\n> > > patch is significantly redesigning when/where run-time pruning\n> > > happens (unless it's just letting that be run twice); but I don't\n> > > see any documentation or name changes suggesting where that\n> > > responsibility is now.\n> >\n> > I am sympathetic to that concern. I spent a while staring at a\n> > baffling comment in 0001 only to discover it had just been moved from\n> > elsewhere. I really don't feel that things in this are as clear as\n> > they could be -- although I hasten to add that I respect the people\n> > who have done work in this area previously and am grateful for what\n> > they did. It's been a huge benefit to the project in spite of the\n> > bumps in the road. Moreover, this isn't the only code in PostgreSQL\n> > that needs improvement, or the worst. That said, I do think there are\n> > problems. I don't yet have a position on whether this patch is making\n> > that better or worse.\n>\n> Okay, I'd like to post a new version with the comments edited to make\n> them a bit more intelligible. 
I understand that the comments around\n> the new invocation mode(s) of runtime pruning are not as clear as they\n> should be, especially as the changes that this patch wants to make to\n> how things work are not very localized.\n\nActually, another area where the comments may not be as clear as they\nshould have been is the changes that the patch makes to the\nAcquireExecutorLocks() logic that decides which relations are locked\nto safeguard the plan tree for execution, which are those given by\nRTE_RELATION entries in the range table.\n\nWithout the patch, they are found by actually scanning the range table.\n\nWith the patch, it's the same set of RTEs if the plan doesn't contain\nany pruning nodes, though instead of the range table, what is scanned\nis a bitmapset of their RT indexes that is made available by the\nplanner in the form of PlannedStmt.lockrels. When the plan does\ncontain a pruning node (PlannedStmt.containsInitialPruning), the\nbitmapset is constructed by calling ExecutorGetLockRels() on the plan\ntree, which walks it to add RT indexes of relations mentioned in the\nScan nodes, while skipping any nodes that are pruned after performing\ninitial pruning steps that may be present in their containing parent\nnode's PartitionPruneInfo. Also, the RT indexes of partitioned tables\nthat are present in the PartitionPruneInfo itself are also added to\nthe set.\n\nWhile expanding comments added by the patch to make this clear, I\nrealized that there are two problems, one of them quite glaring:\n\n* Planner's constructing this bitmapset and its copying along with the\nPlannedStmt is pure overhead in the cases that this patch has nothing\nto do with, which is the kind of thing that Andres cautioned against\nupthread.\n\n* Not all partitioned tables that would have been locked without the\npatch to come up with a Append/MergeAppend plan may be returned by\nExecutorGetLockRels(). 
For example, if none of the query's\nruntime-prunable quals were found to match the partition key of an\nintermediate partitioned table and thus that partitioned table not\nincluded in the PartitionPruneInfo. Or if an Append/MergeAppend\ncovering a partition tree doesn't contain any PartitionPruneInfo to\nbegin with, in which case, only the leaf partitions and none of\npartitioned parents would be accounted for by the\nExecutorGetLockRels() logic.\n\nThe 1st one seems easy to fix by not inventing PlannedStmt.lockrels\nand just doing what's being done now: scan the range table if\n(!PlannedStmt.containsInitialPruning).\n\nThe only way perhaps to fix the second one is to reconsider the\ndecision we made in the following commit:\n\n commit 52ed730d511b7b1147f2851a7295ef1fb5273776\n Author: Tom Lane <tgl@sss.pgh.pa.us>\n Date: Sun Oct 7 14:33:17 2018 -0400\n\n Remove some unnecessary fields from Plan trees.\n\n In the wake of commit f2343653f, we no longer need some fields that\n were used before to control executor lock acquisitions:\n\n * PlannedStmt.nonleafResultRelations can go away entirely.\n\n * partitioned_rels can go away from Append, MergeAppend, and ModifyTable.\n However, ModifyTable still needs to know the RT index of the partition\n root table if any, which was formerly kept in the first entry of that\n list. Add a new field \"rootRelation\" to remember that. rootRelation is\n partly redundant with nominalRelation, in that if it's set it will have\n the same value as nominalRelation. 
However, the latter field has a\n different purpose so it seems best to keep them distinct.\n\nThat is, add back the partitioned_rels field, at least to Append and\nMergeAppend, to store the RT indexes of partitioned tables whose\nchildren's paths are present in Append/MergeAppend.subpaths.\n\nThoughts?\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 22 Mar 2022 21:44:57 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Mar 22, 2022 at 9:44 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Mar 15, 2022 at 3:19 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Tue, Mar 15, 2022 at 5:06 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > On Mon, Mar 14, 2022 at 3:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > > Also, while I've not spent much time at all reading this patch,\n> > > > it seems rather desperately undercommented, and a lot of the\n> > > > new names are unintelligible. In particular, I suspect that the\n> > > > patch is significantly redesigning when/where run-time pruning\n> > > > happens (unless it's just letting that be run twice); but I don't\n> > > > see any documentation or name changes suggesting where that\n> > > > responsibility is now.\n> > >\n> > > I am sympathetic to that concern. I spent a while staring at a\n> > > baffling comment in 0001 only to discover it had just been moved from\n> > > elsewhere. I really don't feel that things in this are as clear as\n> > > they could be -- although I hasten to add that I respect the people\n> > > who have done work in this area previously and am grateful for what\n> > > they did. It's been a huge benefit to the project in spite of the\n> > > bumps in the road. Moreover, this isn't the only code in PostgreSQL\n> > > that needs improvement, or the worst. That said, I do think there are\n> > > problems. 
I don't yet have a position on whether this patch is making\n> > > that better or worse.\n> >\n> > Okay, I'd like to post a new version with the comments edited to make\n> > them a bit more intelligible. I understand that the comments around\n> > the new invocation mode(s) of runtime pruning are not as clear as they\n> > should be, especially as the changes that this patch wants to make to\n> > how things work are not very localized.\n>\n> Actually, another area where the comments may not be as clear as they\n> should have been is the changes that the patch makes to the\n> AcquireExecutorLocks() logic that decides which relations are locked\n> to safeguard the plan tree for execution, which are those given by\n> RTE_RELATION entries in the range table.\n>\n> Without the patch, they are found by actually scanning the range table.\n>\n> With the patch, it's the same set of RTEs if the plan doesn't contain\n> any pruning nodes, though instead of the range table, what is scanned\n> is a bitmapset of their RT indexes that is made available by the\n> planner in the form of PlannedStmt.lockrels. When the plan does\n> contain a pruning node (PlannedStmt.containsInitialPruning), the\n> bitmapset is constructed by calling ExecutorGetLockRels() on the plan\n> tree, which walks it to add RT indexes of relations mentioned in the\n> Scan nodes, while skipping any nodes that are pruned after performing\n> initial pruning steps that may be present in their containing parent\n> node's PartitionPruneInfo. 
Also, the RT indexes of partitioned tables\n> that are present in the PartitionPruneInfo itself are also added to\n> the set.\n>\n> While expanding comments added by the patch to make this clear, I\n> realized that there are two problems, one of them quite glaring:\n>\n> * Planner's constructing this bitmapset and its copying along with the\n> PlannedStmt is pure overhead in the cases that this patch has nothing\n> to do with, which is the kind of thing that Andres cautioned against\n> upthread.\n>\n> * Not all partitioned tables that would have been locked without the\n> patch to come up with a Append/MergeAppend plan may be returned by\n> ExecutorGetLockRels(). For example, if none of the query's\n> runtime-prunable quals were found to match the partition key of an\n> intermediate partitioned table and thus that partitioned table not\n> included in the PartitionPruneInfo. Or if an Append/MergeAppend\n> covering a partition tree doesn't contain any PartitionPruneInfo to\n> begin with, in which case, only the leaf partitions and none of\n> partitioned parents would be accounted for by the\n> ExecutorGetLockRels() logic.\n>\n> The 1st one seems easy to fix by not inventing PlannedStmt.lockrels\n> and just doing what's being done now: scan the range table if\n> (!PlannedStmt.containsInitialPruning).\n\nThe attached updated patch does it like this.\n\n> The only way perhaps to fix the second one is to reconsider the\n> decision we made in the following commit:\n>\n> commit 52ed730d511b7b1147f2851a7295ef1fb5273776\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date: Sun Oct 7 14:33:17 2018 -0400\n>\n> Remove some unnecessary fields from Plan trees.\n>\n> In the wake of commit f2343653f, we no longer need some fields that\n> were used before to control executor lock acquisitions:\n>\n> * PlannedStmt.nonleafResultRelations can go away entirely.\n>\n> * partitioned_rels can go away from Append, MergeAppend, and ModifyTable.\n> However, ModifyTable still needs to know the RT 
index of the partition\n> root table if any, which was formerly kept in the first entry of that\n> list. Add a new field \"rootRelation\" to remember that. rootRelation is\n> partly redundant with nominalRelation, in that if it's set it will have\n> the same value as nominalRelation. However, the latter field has a\n> different purpose so it seems best to keep them distinct.\n>\n> That is, add back the partitioned_rels field, at least to Append and\n> MergeAppend, to store the RT indexes of partitioned tables whose\n> children's paths are present in Append/MergeAppend.subpaths.\n\nAnd implemented this in the attached 0002 that reintroduces\npartitioned_rels in Append/MergeAppend nodes as a bitmapset of RT\nindexes. The set contains the RT indexes of partitioned ancestors\nwhose expansion produced the leaf partitions that a given\nAppend/MergeAppend node scans. This project needs this way of\nknowing the partitioned tables involved in producing an\nAppend/MergeAppend node, because we'd like to give plancache.c the\nability to glean the set of relations to be locked by scanning a plan\ntree to make the tree ready for execution rather than by scanning the\nrange table and the only relations we're missing in the tree right now\nare partitioned tables.\n\nOne fly-in-the-ointment situation I faced when doing that is the fact\nthat setrefs.c in most situations removes the Append/MergeAppend from\nthe final plan if it contains only one child subplan. 
I got around it\nby inventing a PlannerGlobal/PlannedStmt.elidedAppendPartedRels set\nwhich is a union of partitioned_rels of all the Append/MergeAppend\nnodes in the plan tree that were removed as described.\n\nOther than the changes mentioned above, the updated patch now contains\na bit more commentary than earlier versions, mostly around\nAcquireExecutorLocks()'s new way of determining the set of relations\nto lock and the significantly redesigned working of the \"initial\"\nexecution pruning.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 28 Mar 2022 16:17:00 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Mon, Mar 28, 2022 at 4:17 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Other than the changes mentioned above, the updated patch now contains\n> a bit more commentary than earlier versions, mostly around\n> AcquireExecutorLocks()'s new way of determining the set of relations\n> to lock and the significantly redesigned working of the \"initial\"\n> execution pruning.\n\nForgot to rebase over the latest HEAD, so here's v7. 
Also fixed that\n_out and _read functions for PlanInitPruningOutput were using an\nobsolete node label.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 28 Mar 2022 16:28:46 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Mon, Mar 28, 2022 at 4:28 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Mon, Mar 28, 2022 at 4:17 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Other than the changes mentioned above, the updated patch now contains\n> > a bit more commentary than earlier versions, mostly around\n> > AcquireExecutorLocks()'s new way of determining the set of relations\n> > to lock and the significantly redesigned working of the \"initial\"\n> > execution pruning.\n>\n> Forgot to rebase over the latest HEAD, so here's v7. Also fixed that\n> _out and _read functions for PlanInitPruningOutput were using an\n> obsolete node label.\n\nRebased.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 31 Mar 2022 12:25:20 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "I'm looking at 0001 here with intention to commit later. I see that\nthere is some resistance to 0004, but I think a final verdict on that\none doesn't materially affect 0001.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"El destino baraja y nosotros jugamos\" (A. Schopenhauer)\n\n\n", "msg_date": "Thu, 31 Mar 2022 11:56:09 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Thu, Mar 31, 2022 at 6:55 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> I'm looking at 0001 here with intention to commit later. 
I see that\n> there is some resistance to 0004, but I think a final verdict on that\n> one doesn't materially affect 0001.\n\nThanks.\n\nWhile the main goal of the refactoring patch is to make it easier to\nreview the more complex changes that 0004 makes to execPartition.c, I\nagree it has merit on its own. Although, one may say that the bit\nabout providing a PlanState-independent ExprContext is more closely\ntied with 0004's requirements...\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 31 Mar 2022 20:11:54 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Thu, 31 Mar 2022 at 16:25, Amit Langote <amitlangote09@gmail.com> wrote:\n> Rebased.\n\nI've been looking over the v8 patch and I'd like to propose semi-baked\nideas to improve things. I'd need to go and write them myself to\nfully know if they'd actually work ok.\n\n1. You've changed the signature of various functions by adding\nExecLockRelsInfo *execlockrelsinfo. I'm wondering why you didn't just\nput the ExecLockRelsInfo as a new field in PlannedStmt?\n\nI think the above gets around messing the signatures of\nCreateQueryDesc(), ExplainOnePlan(), pg_plan_queries(),\nPortalDefineQuery(), ProcessQuery() It would get rid of your change of\nforeach to forboth in execute_sql_string() / PortalRunMulti() and gets\nrid of a number of places where your carrying around a variable named\nexeclockrelsinfo_list. It would also make the patch significantly\neasier to review as you'd be touching far fewer files.\n\n2. 
I don't really like the way you've gone about most of the patch...\n\nThe way I imagine this working is that during create_plan() we visit\nall nodes that have run-time pruning then inside create_append_plan()\nand create_merge_append_plan() we'd tag those onto a new field in\nPlannerGlobal That way you can store the PartitionPruneInfos in the\nnew PlannedStmt field in standard_planner() after the\nmakeNode(PlannedStmt).\n\nInstead of storing the PartitionPruneInfo in the Append / MergeAppend\nstruct, you'd just add a new index field to those structs. The index\nwould start with 0 for the 0th PartitionPruneInfo. You'd basically\njust know the index by assigning\nlist_length(root->glob->partitionpruneinfos).\n\nYou'd then assign the root->glob->partitionpruneinfos to\nPlannedStmt.partitionpruneinfos and anytime you needed to do run-time\npruning during execution, you'd need to use the Append / MergeAppend's\npartition_prune_info_idx to lookup the PartitionPruneInfo in some new\nfield you add to EState to store those. You'd leave that index as -1\nif there's no PartitionPruneInfo for the Append / MergeAppend node.\n\nWhen you do AcquireExecutorLocks(), you'd iterate over the\nPlannedStmt's PartitionPruneInfo to figure out which subplans to\nprune. You'd then have an array sized\nlist_length(plannedstmt->runtimepruneinfos) where you'd store the\nresult. When the Append/MergeAppend node starts up you just check if\nthe part_prune_info_idx >= 0 and if there's a non-NULL result stored\nthen use that result. That's how you'd ensure you always got the same\nrun-time prune result between locking and plan startup.\n\n3. Also, looking at ExecGetLockRels(), shouldn't it be the planner's\njob to determine the minimum set of relations which must be locked? I\nthink the plan tree traversal during execution not great. Seems the\nwhole point of this patch is to reduce overhead during execution. 
A\nfull additional plan traversal aside from the 3 that we already do for\nstart/run/end of execution seems not great.\n\nI think this means that during AcquireExecutorLocks() you'd start with\nthe minimum set of RTEs that need to be locked as determined during\ncreate_plan() and stored in some Bitmapset field in PlannedStmt. This\nminimal set would also only exclude RTIs that would only possibly be\nused due to a PartitionPruneInfo with initial pruning steps, i.e.\ninclude RTIs from PartitionPruneInfo with no init pruning steps (you\ncan't skip any locks for those). All you need to do to determine the\nRTEs to lock are to take the minimal set and execute each\nPartitionPruneInfo in the PlannedStmt that has init steps.\n\n4. It's a bit disappointing to see RelOptInfo.partitioned_rels getting\nrevived here. Why don't you just add a partitioned_relids to\nPartitionPruneInfo and just have make_partitionedrel_pruneinfo build\nyou a Relids of them. PartitionedRelPruneInfo already has an rtindex\nfield, so you just need to bms_add_member whatever that rtindex is.\n\nIt's a fairly high-level review at this stage. I can look in more\ndetail if the above points get looked at. You may find or know of\nsome reason why it can't be done like I mention above.\n\nDavid\n\n\n", "msg_date": "Fri, 1 Apr 2022 14:31:54 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Thanks a lot for looking into this.\n\nOn Fri, Apr 1, 2022 at 10:32 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> I've been looking over the v8 patch and I'd like to propose semi-baked\n> ideas to improve things. I'd need to go and write them myself to\n> fully know if they'd actually work ok.\n>\n> 1. You've changed the signature of various functions by adding\n> ExecLockRelsInfo *execlockrelsinfo.
I'm wondering why you didn't just\n> put the ExecLockRelsInfo as a new field in PlannedStmt?\n>\n> I think the above gets around messing the signatures of\n> CreateQueryDesc(), ExplainOnePlan(), pg_plan_queries(),\n> PortalDefineQuery(), ProcessQuery() It would get rid of your change of\n> foreach to forboth in execute_sql_string() / PortalRunMulti() and gets\n> rid of a number of places where your carrying around a variable named\n> execlockrelsinfo_list. It would also make the patch significantly\n> easier to review as you'd be touching far fewer files.\n\nI'm worried about that churn myself and did consider this idea, though\nI couldn't shake the feeling that it's maybe wrong to put something in\nPlannedStmt that the planner itself doesn't produce. I mean the\ndefinition of PlannedStmt says this:\n\n/* ----------------\n * PlannedStmt node\n *\n * The output of the planner\n\nWith the ideas that you've outlined below, perhaps we can frame most\nof the things that the patch wants to do as the planner and the\nplancache changes. If we twist the above definition a bit to say what\nthe plancache does in this regard is part of planning, maybe it makes\nsense to add the initial pruning related fields (nodes, outputs) into\nPlannedStmt.\n\n> 2. I don't really like the way you've gone about most of the patch...\n>\n> The way I imagine this working is that during create_plan() we visit\n> all nodes that have run-time pruning then inside create_append_plan()\n> and create_merge_append_plan() we'd tag those onto a new field in\n> PlannerGlobal That way you can store the PartitionPruneInfos in the\n> new PlannedStmt field in standard_planner() after the\n> makeNode(PlannedStmt).\n>\n> Instead of storing the PartitionPruneInfo in the Append / MergeAppend\n> struct, you'd just add a new index field to those structs. The index\n> would start with 0 for the 0th PartitionPruneInfo. 
You'd basically\n> just know the index by assigning\n> list_length(root->glob->partitionpruneinfos).\n>\n> You'd then assign the root->glob->partitionpruneinfos to\n> PlannedStmt.partitionpruneinfos and anytime you needed to do run-time\n> pruning during execution, you'd need to use the Append / MergeAppend's\n> partition_prune_info_idx to lookup the PartitionPruneInfo in some new\n> field you add to EState to store those. You'd leave that index as -1\n> if there's no PartitionPruneInfo for the Append / MergeAppend node.\n>\n> When you do AcquireExecutorLocks(), you'd iterate over the\n> PlannedStmt's PartitionPruneInfo to figure out which subplans to\n> prune. You'd then have an array sized\n> list_length(plannedstmt->runtimepruneinfos) where you'd store the\n> result. When the Append/MergeAppend node starts up you just check if\n> the part_prune_info_idx >= 0 and if there's a non-NULL result stored\n> then use that result. That's how you'd ensure you always got the same\n> run-time prune result between locking and plan startup.\n\nActually, Robert too suggested such an idea to me off-list and I think\nit's worth trying. I was not sure about the implementation, because\nthen we'd be passing around lists of initial pruning nodes/results\nacross many function/module boundaries that you mentioned in your\ncomment 1, but if we agree that PlannedStmt is an acceptable place for\nthose things to be stored, then I agree it's an attractive idea.\n\n> 3. Also, looking at ExecGetLockRels(), shouldn't it be the planner's\n> job to determine the minimum set of relations which must be locked? I\n> think the plan tree traversal during execution not great. Seems the\n> whole point of this patch is to reduce overhead during execution. 
A\n> full additional plan traversal aside from the 3 that we already do for\n> start/run/end of execution seems not great.\n>\n> I think this means that during AcquireExecutorLocks() you'd start with\n> the minimum set or RTEs that need to be locked as determined during\n> create_plan() and stored in some Bitmapset field in PlannedStmt.\n\nThe patch did have a PlannedStmt.lockrels till v6. Though, it wasn't\nthe same thing as you are describing it...\n\n> This\n> minimal set would also only exclude RTIs that would only possibly be\n> used due to a PartitionPruneInfo with initial pruning steps, i.e.\n> include RTIs from PartitionPruneInfo with no init pruining steps (you\n> can't skip any locks for those). All you need to do to determine the\n> RTEs to lock are to take the minimal set and execute each\n> PartitionPruneInfo in the PlannedStmt that has init steps\n\nSo just thinking about an Append/MergeAppend, the minimum set must\ninclude the RT indexes of all the partitioned tables whose direct and\nindirect children's plans will be in 'subplans' and also of the\nchildren if the PartitionPruneInfo doesn't contain initial steps or if\nthere is no PartitionPruneInfo to begin with.\n\nOne question is whether the planner should always pay the overhead of\ninitializing this bitmapset? I mean it's only worthwhile if\nAcquireExecutorLocks() is going to be involved, that is, the plan will\nbe cached and reused.\n\n> 4. It's a bit disappointing to see RelOptInfo.partitioned_rels getting\n> revived here. Why don't you just add a partitioned_relids to\n> PartitionPruneInfo and just have make_partitionedrel_pruneinfo build\n> you a Relids of them. 
PartitionedRelPruneInfo already has an rtindex\n> field, so you just need to bms_add_member whatever that rtindex is.\n\nHmm, not all Append/MergeAppend nodes in the plan tree may have\nmake_partition_pruneinfo() called on them though.\n\nIf not the proposed RelOptInfo.partitioned_rels that is populated in\nthe early planning stages, the only reliable way to get all the\npartitioned tables involved in Appends/MergeAppends at create_plan()\nstage seems to be to make a function out the stanza at the top of\nmake_partition_pruneinfo() that collects them by scanning the leaf\npaths and tracing each path's relation's parents up to the root\npartitioned parent and call it from create_{merge_}append_plan() if\nmake_partition_pruneinfo() was not. I did try to implement that and\nfound it a bit complex and expensive (the scanning the leaf paths\npart).\n\n> It's a fairly high-level review at this stage. I can look in more\n> detail if the above points get looked at. You may find or know of\n> some reason why it can't be done like I mention above.\n\nI'll try to write a version with the above points addressed, while\nkeeping RelOptInfo.partitioned_rels around for now.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/CA%2BHiwqH9-fAvpG-w9qYCcDWzK3vGPCMyw4f9nHzqkxXVuD1pxw%40mail.gmail.com\n\n\n", "msg_date": "Fri, 1 Apr 2022 12:09:35 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Fri, Apr 1, 2022 at 10:32 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>> 1. You've changed the signature of various functions by adding\n>> ExecLockRelsInfo *execlockrelsinfo. 
I'm wondering why you didn't just\n>> put the ExecLockRelsInfo as a new field in PlannedStmt?\n\n> I'm worried about that churn myself and did consider this idea, though\n> I couldn't shake the feeling that it's maybe wrong to put something in\n> PlannedStmt that the planner itself doesn't produce.\n\nPlannedStmt is part of the plan tree, which MUST be read-only to\nthe executor. This is not negotiable. However, there's other\nplaces that this data could be put, such as QueryDesc.\nOr for that matter, couldn't the data structure be created by\nthe planner? (It looks like David is proposing exactly that\nfurther down.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 31 Mar 2022 23:45:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, 1 Apr 2022 at 16:09, Amit Langote <amitlangote09@gmail.com> wrote:\n> definition of PlannedStmt says this:\n>\n> /* ----------------\n> * PlannedStmt node\n> *\n> * The output of the planner\n>\n> With the ideas that you've outlined below, perhaps we can frame most\n> of the things that the patch wants to do as the planner and the\n> plancache changes. If we twist the above definition a bit to say what\n> the plancache does in this regard is part of planning, maybe it makes\n> sense to add the initial pruning related fields (nodes, outputs) into\n> PlannedStmt.\n\nHow about the PartitionPruneInfos go into PlannedStmt as a List\nindexed in the way I mentioned and the cache of the results of pruning\nin EState?\n\nI think that leaves you adding List *partpruneinfos, Bitmapset\n*minimumlockrtis to PlannedStmt and the thing you have to cache the\npruning results into EState. I'm not very clear on where you should\nstash the results of run-time pruning in the meantime before you can\nput them in EState. 
You might need to invent some intermediate struct\nthat gets passed around that you can scribble down some details you're\ngoing to need during execution.\n\n> One question is whether the planner should always pay the overhead of\n> initializing this bitmapset? I mean it's only worthwhile if\n> AcquireExecutorLocks() is going to be involved, that is, the plan will\n> be cached and reused.\n\nMaybe the Bitmapset for the minimal locks needs to be built with\nbms_add_range(NULL, 0, list_length(rtable)); then do\nbms_del_members() on the relevant RTIs you find in the listed\nPartitionPruneInfos. That way it's very simple and cheap to do when\nthere are no PartitionPruneInfos.\n\n> > 4. It's a bit disappointing to see RelOptInfo.partitioned_rels getting\n> > revived here. Why don't you just add a partitioned_relids to\n> > PartitionPruneInfo and just have make_partitionedrel_pruneinfo build\n> > you a Relids of them. PartitionedRelPruneInfo already has an rtindex\n> > field, so you just need to bms_add_member whatever that rtindex is.\n>\n> Hmm, not all Append/MergeAppend nodes in the plan tree may have\n> make_partition_pruneinfo() called on them though.\n\nFor Append/MergeAppends without run-time pruning you'll want to add\nthe RTIs to the minimal locking set of RTIs to go into PlannedStmt.\nThe only things you want to leave out of that are RTIs for the RTEs\nthat you might run-time prune away during AcquireExecutorLocks().\n\nDavid\n\n\n", "msg_date": "Fri, 1 Apr 2022 17:08:27 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Apr 1, 2022 at 1:08 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Fri, 1 Apr 2022 at 16:09, Amit Langote <amitlangote09@gmail.com> wrote:\n> > definition of PlannedStmt says this:\n> >\n> > /* ----------------\n> > * PlannedStmt node\n> > *\n> > * The output of the planner\n> >\n> > With the ideas that you've outlined 
below, perhaps we can frame most\n> > of the things that the patch wants to do as the planner and the\n> > plancache changes. If we twist the above definition a bit to say what\n> > the plancache does in this regard is part of planning, maybe it makes\n> > sense to add the initial pruning related fields (nodes, outputs) into\n> > PlannedStmt.\n>\n> How about the PartitionPruneInfos go into PlannedStmt as a List\n> indexed in the way I mentioned and the cache of the results of pruning\n> in EState?\n>\n> I think that leaves you adding List *partpruneinfos, Bitmapset\n> *minimumlockrtis to PlannedStmt and the thing you have to cache the\n> pruning results into EState. I'm not very clear on where you should\n> stash the results of run-time pruning in the meantime before you can\n> put them in EState. You might need to invent some intermediate struct\n> that gets passed around that you can scribble down some details you're\n> going to need during execution.\n\nYes, the ExecLockRelsInfo node in the current patch, that first gets\nadded to the QueryDesc and subsequently to the EState of the query,\nserves as that stashing place. Not sure if you've looked at\nExecLockRelInfo in detail in your review of the patch so far, but it\ncarries the initial pruning result in what are called\nPlanInitPruningOutput nodes, which are stored in a list in\nExecLockRelsInfo and their offsets in the list are in turn stored in\nan adjacent array that contains an element for every plan node in the\ntree. If we go with a PlannedStmt.partpruneinfos list, then maybe we\ndon't need to have that array, because the Append/MergeAppend nodes\nwould be carrying those offsets by themselves.\n\nMaybe a different name for ExecLockRelsInfo would be better?\n\nAlso, given Tom's apparent dislike for carrying that in PlannedStmt,\nmaybe the way I have it now is fine?\n\n> > One question is whether the planner should always pay the overhead of\n> > initializing this bitmapset? 
I mean it's only worthwhile if\n> > AcquireExecutorLocks() is going to be involved, that is, the plan will\n> > be cached and reused.\n>\n> Maybe the Bitmapset for the minimal locks needs to be built with\n> bms_add_range(NULL, 0, list_length(rtable)); then do\n> bms_del_members() on the relevant RTIs you find in the listed\n> PartitionPruneInfos. That way it's very simple and cheap to do when\n> there are no PartitionPruneInfos.\n\nAh, okay. Looking at make_partition_pruneinfo(), I think I see a way\nto delete the RTIs of prunable relations -- construct a\nall_matched_leaf_part_relids in parallel to allmatchedsubplans and\ndelete those from the initial set.\n\n> > > 4. It's a bit disappointing to see RelOptInfo.partitioned_rels getting\n> > > revived here. Why don't you just add a partitioned_relids to\n> > > PartitionPruneInfo and just have make_partitionedrel_pruneinfo build\n> > > you a Relids of them. PartitionedRelPruneInfo already has an rtindex\n> > > field, so you just need to bms_add_member whatever that rtindex is.\n> >\n> > Hmm, not all Append/MergeAppend nodes in the plan tree may have\n> > make_partition_pruneinfo() called on them though.\n>\n> For Append/MergeAppends without run-time pruning you'll want to add\n> the RTIs to the minimal locking set of RTIs to go into PlannedStmt.\n> The only things you want to leave out of that are RTIs for the RTEs\n> that you might run-time prune away during AcquireExecutorLocks().\n\nYeah, I see it now.\n\nThanks.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 1 Apr 2022 15:58:13 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Apr 1, 2022 at 12:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Fri, Apr 1, 2022 at 10:32 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> >> 1. 
You've changed the signature of various functions by adding\n> >> ExecLockRelsInfo *execlockrelsinfo. I'm wondering why you didn't just\n> >> put the ExecLockRelsInfo as a new field in PlannedStmt?\n>\n> > I'm worried about that churn myself and did consider this idea, though\n> > I couldn't shake the feeling that it's maybe wrong to put something in\n> > PlannedStmt that the planner itself doesn't produce.\n>\n> PlannedStmt is part of the plan tree, which MUST be read-only to\n> the executor. This is not negotiable. However, there's other\n> places that this data could be put, such as QueryDesc.\n> Or for that matter, couldn't the data structure be created by\n> the planner? (It looks like David is proposing exactly that\n> further down.)\n\nThe data structure in question is for storing the results of\nperforming initial partition pruning on a generic plan, which the\nproposes to do in plancache.c -- inside the body of\nAcquireExecutorLocks()'s loop over PlannedStmts -- so, it's hard to\nsee it as a product of the planner. :-(\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 1 Apr 2022 16:01:18 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, 1 Apr 2022 at 19:58, Amit Langote <amitlangote09@gmail.com> wrote:\n> Yes, the ExecLockRelsInfo node in the current patch, that first gets\n> added to the QueryDesc and subsequently to the EState of the query,\n> serves as that stashing place. Not sure if you've looked at\n> ExecLockRelInfo in detail in your review of the patch so far, but it\n> carries the initial pruning result in what are called\n> PlanInitPruningOutput nodes, which are stored in a list in\n> ExecLockRelsInfo and their offsets in the list are in turn stored in\n> an adjacent array that contains an element for every plan node in the\n> tree. 
If we go with a PlannedStmt.partpruneinfos list, then maybe we\n> don't need to have that array, because the Append/MergeAppend nodes\n> would be carrying those offsets by themselves.\n\nI saw it, just not in great detail. I saw that you had an array that\nwas indexed by the plan node's ID. I thought that wouldn't be so good\nwith large complex plans that we often get with partitioning\nworkloads. That's why I mentioned using another index that you store\nin Append/MergeAppend that starts at 0 and increments by 1 for each\nnode that has a PartitionPruneInfo made for it during create_plan.\n\n> Maybe a different name for ExecLockRelsInfo would be better?\n>\n> Also, given Tom's apparent dislike for carrying that in PlannedStmt,\n> maybe the way I have it now is fine?\n\nI think if you change how it's indexed and the other stuff then we can\nhave another look. I think the patch will be much easier to review\nonce the ParitionPruneInfos are moved into PlannedStmt.\n\nDavid\n\n\n", "msg_date": "Fri, 1 Apr 2022 21:19:49 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Apr 1, 2022 at 5:20 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Fri, 1 Apr 2022 at 19:58, Amit Langote <amitlangote09@gmail.com> wrote:\n> > Yes, the ExecLockRelsInfo node in the current patch, that first gets\n> > added to the QueryDesc and subsequently to the EState of the query,\n> > serves as that stashing place. Not sure if you've looked at\n> > ExecLockRelInfo in detail in your review of the patch so far, but it\n> > carries the initial pruning result in what are called\n> > PlanInitPruningOutput nodes, which are stored in a list in\n> > ExecLockRelsInfo and their offsets in the list are in turn stored in\n> > an adjacent array that contains an element for every plan node in the\n> > tree. 
If we go with a PlannedStmt.partpruneinfos list, then maybe we\n> > don't need to have that array, because the Append/MergeAppend nodes\n> > would be carrying those offsets by themselves.\n>\n> I saw it, just not in great detail. I saw that you had an array that\n> was indexed by the plan node's ID. I thought that wouldn't be so good\n> with large complex plans that we often get with partitioning\n> workloads. That's why I mentioned using another index that you store\n> in Append/MergeAppend that starts at 0 and increments by 1 for each\n> node that has a PartitionPruneInfo made for it during create_plan.\n>\n> > Maybe a different name for ExecLockRelsInfo would be better?\n> >\n> > Also, given Tom's apparent dislike for carrying that in PlannedStmt,\n> > maybe the way I have it now is fine?\n>\n> I think if you change how it's indexed and the other stuff then we can\n> have another look. I think the patch will be much easier to review\n> once the ParitionPruneInfos are moved into PlannedStmt.\n\nWill do, thanks.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 1 Apr 2022 17:36:49 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "I noticed a definitional problem in 0001 that's also a bug in some\nconditions -- namely that the bitmapset \"validplans\" is never explicitly\ninitialized to NIL. 
In the original coding, the BMS was always returned\nfrom somewhere; in the new code, it is passed from an uninitialized\nstack variable into the new ExecInitPartitionPruning function, which\nthen proceeds to add new members to it without initializing it first.\nIndeed that function's header comment explicitly indicates that it is\nnot initialized:\n\n+ * Initial pruning can be done immediately, so it is done here if needed and\n+ * the set of surviving partition subplans' indexes are added to the output\n+ * parameter *initially_valid_subplans.\n\neven though this is not fully correct, because when prunestate->do_initial_prune\nis false, then the BMS *is* initialized.\n\nI have no opinion on where to initialize it, but it needs to be done\nsomewhere and the comment needs to agree.\n\n\nI think the names ExecCreatePartitionPruneState and\nExecInitPartitionPruning are too confusingly similar. Maybe the former\nshould be renamed to somehow make it clear that it is a subroutine for\nthe latter.\n\n\nAt the top of the file, there's a new comment that reads:\n\n * ExecInitPartitionPruning:\n * Creates the PartitionPruneState required by each of the two pruning\n * functions.\n\nWhat are \"the two pruning functions\"? I think here you mean \"Append\"\nand \"MergeAppend\". 
Maybe spell that out explicitly.\n\n\nI think this comment needs to be reworded:\n\n+ * Subplans would previously be indexed 0..(n_total_subplans - 1) should be\n+ * changed to index range 0..num(initially_valid_subplans).\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sun, 3 Apr 2022 13:21:56 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Thanks for the review.\n\nOn Sun, Apr 3, 2022 at 8:33 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> I noticed a definitional problem in 0001 that's also a bug in some\n> conditions -- namely that the bitmapset \"validplans\" is never explicitly\n> initialized to NIL. In the original coding, the BMS was always returned\n> from somewhere; in the new code, it is passed from an uninitialized\n> stack variable into the new ExecInitPartitionPruning function, which\n> then proceeds to add new members to it without initializing it first.\n\nHmm, the following blocks in ExecInitPartitionPruning() define\n*initially_valid_subplans:\n\n /*\n * Perform an initial partition prune pass, if required.\n */\n if (prunestate->do_initial_prune)\n {\n /* Determine which subplans survive initial pruning */\n *initially_valid_subplans = ExecFindInitialMatchingSubPlans(prunestate);\n }\n else\n {\n /* We'll need to initialize all subplans */\n Assert(n_total_subplans > 0);\n *initially_valid_subplans = bms_add_range(NULL, 0,\n n_total_subplans - 1);\n }\n\nAFAICS, both assign *initially_valid_subplans a value whose\ncomputation is not dependent on reading it first, so I don't see a\nproblem.\n\nAm I missing something?\n\n> Indeed that function's header comment explicitly indicates that it is\n> not initialized:\n>\n> + * Initial pruning can be done immediately, so it is done here if needed and\n> + * the set of surviving partition subplans' indexes are added to the output\n> + 
* parameter *initially_valid_subplans.\n>\n> even though this is not fully correct, because when prunestate->do_initial_prune\n> is false, then the BMS *is* initialized.\n>\n> I have no opinion on where to initialize it, but it needs to be done\n> somewhere and the comment needs to agree.\n\nI can see that the comment is insufficient, so I've expanded it as follows:\n\n- * Initial pruning can be done immediately, so it is done here if needed and\n- * the set of surviving partition subplans' indexes are added to the output\n- * parameter *initially_valid_subplans.\n+ * On return, *initially_valid_subplans is assigned the set of indexes of\n+ * child subplans that must be initialized along with the parent plan node.\n+ * Initial pruning is performed here if needed and in that case only the\n+ * surviving subplans' indexes are added.\n\n> I think the names ExecCreatePartitionPruneState and\n> ExecInitPartitionPruning are too confusingly similar. Maybe the former\n> should be renamed to somehow make it clear that it is a subroutine for\n> the latter.\n\nAh, yes. I've taken out the \"Exec\" from the former.\n\n> At the top of the file, there's a new comment that reads:\n>\n> * ExecInitPartitionPruning:\n> * Creates the PartitionPruneState required by each of the two pruning\n> * functions.\n>\n> What are \"the two pruning functions\"? I think here you mean \"Append\"\n> and \"MergeAppend\". Maybe spell that out explicitly.\n\nActually it meant: ExecFindInitialMatchingSubPlans() and\nExecFindMatchingSubPlans(). They perform the \"initial\" and \"exec\" sets of\npruning steps, respectively.\n\nI realized that both functions have identical bodies at this point,\nexcept that they pass 'true' and 'false', respectively, for the\ninitial_prune argument of the sub-routine\nfind_matching_subplans_recurse(), which is where the pruning using the\nappropriate set of steps contained in PartitionPruneState\n(initial_pruning_steps or exec_pruning_steps) actually occurs. 
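In miniature, the shape of that duplication and its resolution looks like this (a standalone toy sketch, not the actual execPartition.c code; the types and "steps" below are stand-ins made up purely for illustration):

```c
#include <stdbool.h>

/* Toy stand-in for PartitionPruneState: just the two sets of "steps". */
typedef struct ToyPruneState
{
    const int  *initial_steps;  /* steps safe to run at plan startup */
    int         n_initial;
    const int  *exec_steps;     /* steps that need execution-time Params */
    int         n_exec;
} ToyPruneState;

/* The common worker; both former entry points reduced to calling this. */
static int
toy_find_matching_subplans_recurse(const ToyPruneState *state,
                                   bool initial_prune)
{
    const int  *steps = initial_prune ? state->initial_steps : state->exec_steps;
    int         nsteps = initial_prune ? state->n_initial : state->n_exec;
    int         nmatched = 0;

    /* pretend each "step" simply reports how many subplans it matched */
    for (int i = 0; i < nsteps; i++)
        nmatched += steps[i];

    return nmatched;
}

/*
 * The single surviving entry point: callers pass initial_prune directly,
 * instead of there being two wrappers with otherwise identical bodies.
 */
static int
toy_ExecFindMatchingSubPlans(const ToyPruneState *state, bool initial_prune)
{
    return toy_find_matching_subplans_recurse(state, initial_prune);
}

/* sample data */
static const int toy_init_steps[] = {1, 2};
static const int toy_exec_steps[] = {5, 7, 9};
static const ToyPruneState toy_state = {toy_init_steps, 2, toy_exec_steps, 3};
```

(The real functions of course operate on a Bitmapset of subplan indexes; plain ints are used above only to keep the sketch self-contained.)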
So,\nI've updated the patch to just retain the latter, adding an\ninitial_prune parameter to it to pass to the aforementioned\nfind_matching_subplans_recurse().\n\nI've also updated the run-time pruning module comment to describe this change:\n\n * ExecFindMatchingSubPlans:\n- * Returns indexes of matching subplans after evaluating all available\n- * expressions, that is, using execution pruning steps. This function\n- * can only be called during execution and must be called again each time\n- * the value of a Param listed in PartitionPruneState's 'execparamids'\n- * changes.\n+ * Returns indexes of matching subplans after evaluating the expressions\n+ * that are safe to evaluate at a given point. This function is first\n+ * called during ExecInitPartitionPruning() to find the initially\n+ * matching subplans based on performing the initial pruning steps and\n+ * then must be called again each time the value of a Param listed in\n+ * PartitionPruneState's 'execparamids' changes.\n\n> I think this comment needs to be reworded:\n>\n> + * Subplans would previously be indexed 0..(n_total_subplans - 1) should be\n> + * changed to index range 0..num(initially_valid_subplans).\n\nAssuming you meant to ask to write this without the odd notation, I've\nexpanded the comment as follows:\n\n- * Subplans would previously be indexed 0..(n_total_subplans - 1) should be\n- * changed to index range 0..num(initially_valid_subplans).\n+ * Current values of the indexes present in PartitionPruneState count all the\n+ * subplans that would be present before initial pruning was done. 
If initial\n+ * pruning got rid of some of the subplans, any subsequent pruning passes will\n+ * be looking at a different set of target subplans to choose from than\n+ * those in the pre-initial-pruning set, so the maps in PartitionPruneState\n+ * containing those indexes must be updated to reflect the new indexes of\n+ * subplans in the post-initial-pruning set.\n\nI've attached only the updated 0001, though I'm still working on the\nothers to address David's comments.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 4 Apr 2022 21:55:54 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Mon, Apr 4, 2022 at 9:55 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Sun, Apr 3, 2022 at 8:33 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > I think the names ExecCreatePartitionPruneState and\n> > ExecInitPartitionPruning are too confusingly similar. Maybe the former\n> > should be renamed to somehow make it clear that it is a subroutine for\n> > the latter.\n>\n> Ah, yes. I've taken out the \"Exec\" from the former.\n\nWhile at it, maybe it's better to rename ExecInitPruningContext() to\nInitPartitionPruneContext(), which I've done in the attached updated\npatch.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 5 Apr 2022 11:29:49 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On 2022-Apr-05, Amit Langote wrote:\n\n> While at it, maybe it's better to rename ExecInitPruningContext() to\n> InitPartitionPruneContext(), which I've done in the attached updated\n> patch.\n\nGood call. I had changed that name too, but yours seems a better\nchoice.\n\nI made a few other cosmetic changes and pushed. 
I'm afraid this will\ncause a few conflicts with your 0004 -- hopefully these should mostly be\nminor.\n\nOne change that's not completely cosmetic is a change in the test on\nwhether to call PartitionPruneFixSubPlanMap or not. Originally it was:\n\nif (partprune->do_exec_prune &&\n bms_num_members( ... ))\n \tdo_stuff();\n\nwhich meant that bms_num_members() is only evaluated if do_exec_prune.\nHowever, the do_exec_prune bit is an optimization (we can skip doing\nthat stuff if it's not going to be used), but the other test is more\nstrict: the stuff is completely irrelevant if no plans have been\nremoved, since the data structure does not need fixing. So I changed it\nto be like this\n\nif (bms_num_members( .. ))\n{\n\t/* can skip if it's pointless */\n\tif (do_exec_prune)\n\t\tdo_stuff();\n}\n\nI think that it is clearer to the human reader this way; and I think a\nsmart compiler may realize that the test can be reversed and avoid\ncounting bits when it's pointless.\n\nSo your 0004 patch should add the new condition to the outer if(), since\nit's a critical consideration rather than an optimization:\nif (partprune && bms_num_members())\n{\n\t/* can skip if pointless */\n\tif (do_exec_prune)\n\t\tdo_stuff()\n}\n\nNow, if we disagree and think that counting bits in the BMS when it's\ngoing to be discarded by do_exec_prune being false is too wasteful, then\nwe can flip that back to the original arrangement with a more explicit\ncomment. With no evidence,\nI doubt it matters.\n\nThanks for the patch! 
I think the new coding is indeed a bit easier to\nfollow.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n<inflex> really, I see PHP as like a strange amalgamation of C, Perl, Shell\n<crab> inflex: you know that \"amalgam\" means \"mixture with mercury\",\n more or less, right?\n<crab> i.e., \"deadly poison\"\n\n\n", "msg_date": "Tue, 5 Apr 2022 12:00:35 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Apr 5, 2022 at 7:00 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Apr-05, Amit Langote wrote:\n> > While at it, maybe it's better to rename ExecInitPruningContext() to\n> > InitPartitionPruneContext(), which I've done in the attached updated\n> > patch.\n>\n> Good call. I had changed that name too, but yours seems a better\n> choice.\n>\n> I made a few other cosmetic changes and pushed.\n\nThanks!\n\n> I'm afraid this will\n> cause a few conflicts with your 0004 -- hopefully these should mostly be\n> minor.\n>\n> One change that's not completely cosmetic is a change in the test on\n> whether to call PartitionPruneFixSubPlanMap or not. Originally it was:\n>\n> if (partprune->do_exec_prune &&\n> bms_num_members( ... ))\n> do_stuff();\n>\n> which meant that bms_num_members() is only evaluated if do_exec_prune.\n> However, the do_exec_prune bit is an optimization (we can skip doing\n> that stuff if it's not going to be used), but the other test is more\n> strict: the stuff is completely irrelevant if no plans have been\n> removed, since the data structure does not need fixing. So I changed it\n> to be like this\n>\n> if (bms_num_members( .. 
))\n> {\n> /* can skip if it's pointless */\n> if (do_exec_prune)\n> do_stuff();\n> }\n>\n> I think that it is clearer to the human reader this way; and I think a\n> smart compiler may realize that the test can be reversed and avoid\n> counting bits when it's pointless.\n>\n> So your 0004 patch should add the new condition to the outer if(), since\n> it's a critical consideration rather than an optimization:\n> if (partprune && bms_num_members())\n> {\n> /* can skip if pointless */\n> if (do_exec_prune)\n> do_stuff()\n> }\n>\n> Now, if we disagree and think that counting bits in the BMS when it's\n> going to be discarded by do_exec_prune being false is too wasteful, then\n> we can flip that back to the original arrangement with a more explicit\n> comment. With no evidence,\n> I doubt it matters.\n\nI agree that counting bits in the outer condition makes this easier to\nread, so I see no problem with keeping it that way.\n\nWill post the rebased main patch soon, whose rewrite I'm close to\nbeing done with.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Apr 2022 21:56:02 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Apr 1, 2022 at 5:36 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Fri, Apr 1, 2022 at 5:20 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > On Fri, 1 Apr 2022 at 19:58, Amit Langote <amitlangote09@gmail.com> wrote:\n> > > Yes, the ExecLockRelsInfo node in the current patch, that first gets\n> > > added to the QueryDesc and subsequently to the EState of the query,\n> > > serves as that stashing place. 
Not sure if you've looked at\n> > > ExecLockRelInfo in detail in your review of the patch so far, but it\n> > > carries the initial pruning result in what are called\n> > > PlanInitPruningOutput nodes, which are stored in a list in\n> > > ExecLockRelsInfo and their offsets in the list are in turn stored in\n> > > an adjacent array that contains an element for every plan node in the\n> > > tree. If we go with a PlannedStmt.partpruneinfos list, then maybe we\n> > > don't need to have that array, because the Append/MergeAppend nodes\n> > > would be carrying those offsets by themselves.\n> >\n> > I saw it, just not in great detail. I saw that you had an array that\n> > was indexed by the plan node's ID. I thought that wouldn't be so good\n> > with large complex plans that we often get with partitioning\n> > workloads. That's why I mentioned using another index that you store\n> > in Append/MergeAppend that starts at 0 and increments by 1 for each\n> > node that has a PartitionPruneInfo made for it during create_plan.\n> >\n> > > Maybe a different name for ExecLockRelsInfo would be better?\n> > >\n> > > Also, given Tom's apparent dislike for carrying that in PlannedStmt,\n> > > maybe the way I have it now is fine?\n> >\n> > I think if you change how it's indexed and the other stuff then we can\n> > have another look. I think the patch will be much easier to review\n> > once the ParitionPruneInfos are moved into PlannedStmt.\n>\n> Will do, thanks.\n\nAnd here is a version like that that passes make check-world. Maybe\nstill a WIP as I think comments could use more editing.\n\nHere's how the new implementation works:\n\nAcquireExecutorLocks() calls ExecutorDoInitialPruning(), which in turn\niterates over a list of PartitionPruneInfos in a given PlannedStmt\ncoming from a CachedPlan. For each PartitionPruneInfo,\nExecPartitionDoInitialPruning() is called, which sets up\nPartitionPruneState and performs initial pruning steps present in the\nPartitionPruneInfo. 
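Schematically, that flow looks like the following -- a standalone toy model, with bitmapsets reduced to uint32 masks and the real planner/executor types replaced by made-up stand-ins, so only the control flow matches the description above:

```c
#include <stdint.h>

/*
 * Toy stand-in for PartitionPruneInfo: which subplans exist under a given
 * Append/MergeAppend node, and which of them the initial pruning steps
 * would discard (precomputed here to keep the model small).
 */
typedef struct ToyPruneInfo
{
    uint32_t    all_subplans;
    uint32_t    pruned_subplans;
} ToyPruneInfo;

/* Stand-in for ExecPartitionDoInitialPruning(): the surviving subplans. */
static uint32_t
toy_partition_do_initial_pruning(const ToyPruneInfo *pinfo)
{
    return pinfo->all_subplans & ~pinfo->pruned_subplans;
}

/*
 * Stand-in for ExecutorDoInitialPruning(): one valid-subplans set per
 * prune info of the statement, collected into results[].
 */
static void
toy_executor_do_initial_pruning(const ToyPruneInfo *infos, int ninfos,
                                uint32_t *results)
{
    for (int i = 0; i < ninfos; i++)
        results[i] = toy_partition_do_initial_pruning(&infos[i]);
}

/* sample data: two prune infos; pruning discards subplans 0 and 2 of the first */
static const ToyPruneInfo toy_infos[2] = {{0x0Fu, 0x05u}, {0x33u, 0x00u}};
static uint32_t toy_results[2];
```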
The resulting bitmapsets of valid subplans, one\nfor each PartitionPruneInfo, are collected in a list and added to a\nresult node called PartitionPruneResult. It represents the result of\nperforming initial pruning on all PartitionPruneInfos found in a plan.\nA list of PartitionPruneResults is passed along with the PlannedStmt\nto the executor, which is referenced when initializing\nAppend/MergeAppend nodes.\n\nPlannedStmt.minLockRelids defined by the planner contains the RT\nindexes of all the entries in the range table minus those of the leaf\npartitions whose subplans are subject to removal due to initial\npruning. AcquireExecutorLocks() adds back the RT indexes of only those\nleaf partitions whose subplans survive ExecutorDoInitialPruning(). To\nget the leaf partition RT indexes from the PartitionPruneInfo, a new\nrti_map array is added to PartitionedRelPruneInfo.\n\nThere's only one patch this time. Patches that added partitioned_rels\nand plan_tree_walker() are no longer necessary.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 6 Apr 2022 16:20:46 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Wed, Apr 6, 2022 at 4:20 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> And here is a version like that that passes make check-world. Maybe\n> still a WIP as I think comments could use more editing.\n>\n> Here's how the new implementation works:\n>\n> AcquireExecutorLocks() calls ExecutorDoInitialPruning(), which in turn\n> iterates over a list of PartitionPruneInfos in a given PlannedStmt\n> coming from a CachedPlan. For each PartitionPruneInfo,\n> ExecPartitionDoInitialPruning() is called, which sets up\n> PartitionPruneState and performs initial pruning steps present in the\n> PartitionPruneInfo. 
The resulting bitmapsets of valid subplans, one\n> for each PartitionPruneInfo, are collected in a list and added to a\n> result node called PartitionPruneResult. It represents the result of\n> performing initial pruning on all PartitionPruneInfos found in a plan.\n> A list of PartitionPruneResults is passed along with the PlannedStmt\n> to the executor, which is referenced when initializing\n> Append/MergeAppend nodes.\n>\n> PlannedStmt.minLockRelids defined by the planner contains the RT\n> indexes of all the entries in the range table minus those of the leaf\n> partitions whose subplans are subject to removal due to initial\n> pruning. AcquireExecutorLocks() adds back the RT indexes of only those\n> leaf partitions whose subplans survive ExecutorDoInitialPruning(). To\n> get the leaf partition RT indexes from the PartitionPruneInfo, a new\n> rti_map array is added to PartitionedRelPruneInfo.\n>\n> There's only one patch this time. Patches that added partitioned_rels\n> and plan_tree_walker() are no longer necessary.\n\nHere's an updated version. In particular, I removed the\npart_prune_results list from PortalData, in favor of letting anything\nthat needs to look at the list get it from the CachedPlan\n(PortalData.cplan). 
This makes things better in 2 ways:\n\n* All the changes that were needed to produce the list to be passed to\nPortalDefineQuery() are now unnecessary (the especially ugly ones were\nthose made to pg_plan_queries()'s interface)\n\n* The cases in which the PartitionPruneResult being added to a\nQueryDesc can be assumed to be valid are more clearly defined now; it's\nthe cases where the portal's CachedPlan is also valid, that is, if the\naccompanying PlannedStmt is a cached one.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 7 Apr 2022 17:27:50 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Thu, 7 Apr 2022 at 20:28, Amit Langote <amitlangote09@gmail.com> wrote:\n> Here's an updated version. In particular, I removed the\n> part_prune_results list from PortalData, in favor of letting anything\n> that needs to look at the list get it from the CachedPlan\n> (PortalData.cplan). This makes things better in 2 ways:\n\nThanks for making those changes.\n\nI'm not overly familiar with the data structures we use for passing\naround plans between the planner and executor, but storing the pruning\nresults in CachedPlan seems pretty bad. I see you've stashed it in\nthere and invented a new memory context to stop leaks into the cache\nmemory.\n\nSince I'm not overly familiar with these structures, I'm trying to\nimagine why you made that choice and the best I can come up with was\nthat it was the most convenient thing you had to hand inside\nCheckCachedPlan().\n\nI don't really have any great ideas right now on how to make this\nbetter. 
I wonder if GetCachedPlan() should be changed to return some\nstruct that wraps up the CachedPlan with some sort of executor prep\ninfo struct that we can stash the list of PartitionPruneResults in,\nand perhaps something else one day.\n\nSome lesser important stuff that I think could be done better.\n\n* Are you also able to put meaningful comments on the\nPartitionPruneResult struct in execnodes.h?\n\n* In create_append_plan() and create_merge_append_plan() you have the\nsame code to set the part_prune_index. Why not just move all that code\ninto make_partition_pruneinfo() and have make_partition_pruneinfo()\nreturn the index and append to the PlannerInfo.partPruneInfos List?\n\n* Why not forboth() here?\n\ni = 0;\nforeach(stmtlist_item, portal->stmts)\n{\nPlannedStmt *pstmt = lfirst_node(PlannedStmt, stmtlist_item);\nPartitionPruneResult *part_prune_result = part_prune_results ?\n list_nth(part_prune_results, i) :\n NULL;\n\ni++;\n\n* It would be good if ReleaseExecutorLocks() already knew the RTIs\nthat were locked. Maybe the executor prep info struct I mentioned\nabove could also store the RTIs that have been locked already and\nallow ReleaseExecutorLocks() to just iterate over those to release the\nlocks.\n\nDavid\n\n\n", "msg_date": "Fri, 8 Apr 2022 00:41:13 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Thu, Apr 7, 2022 at 9:41 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Thu, 7 Apr 2022 at 20:28, Amit Langote <amitlangote09@gmail.com> wrote:\n> > Here's an updated version. In Particular, I removed\n> > part_prune_results list from PortalData, in favor of anything that\n> > needs to look at the list can instead get it from the CachedPlan\n> > (PortalData.cplan). 
This makes things better in 2 ways:\n>\n> Thanks for making those changes.\n>\n> I'm not overly familiar with the data structures we use for planning\n> around plans between the planner and executor, but storing the pruning\n> results in CachedPlan seems pretty bad. I see you've stashed it in\n> there and invented a new memory context to stop leaks into the cache\n> memory.\n>\n> Since I'm not overly familiar with these structures, I'm trying to\n> imagine why you made that choice and the best I can come up with was\n> that it was the most convenient thing you had to hand inside\n> CheckCachedPlan().\n\nYeah, it's that way because it felt convenient, though I have wondered\nif a simpler scheme that doesn't require any changes to the CachedPlan\ndata structure might be better after all. Your pointing it out has\nmade me think a bit harder on that.\n\n> I don't really have any great ideas right now on how to make this\n> better. I wonder if GetCachedPlan() should be changed to return some\n> struct that wraps up the CachedPlan with some sort of executor prep\n> info struct that we can stash the list of PartitionPruneResults in,\n> and perhaps something else one day.\n\nI think what might be better to do now is just add an output List\nparameter to GetCachedPlan() to add the PartitionPruneResult node to\ninstead of stashing them into CachedPlan as now. IMHO, we should\nleave inventing a new generic struct to the next project that will\nmake it necessary to return more information from GetCachedPlan() to\nits users. I find it hard to convincingly describe what the new\ngeneric struct really is if we invent it *now*, when it's going to\ncarry a single list whose purpose is pretty narrow.\n\nSo, I've implemented this by making the callers of GetCachedPlan()\npass a list to add the PartitionPruneResults that may be produced.\nMost callers can put that into the Portal for passing that to other\nmodules, so I have reinstated PortalData.part_prune_results. 
As for\nits memory management, the list and the PartitionPruneResults therein\nwill be allocated in a context that holds the Portal itself.\n\n> Some lesser important stuff that I think could be done better.\n>\n> * Are you also able to put meaningful comments on the\n> PartitionPruneResult struct in execnodes.h?\n>\n> * In create_append_plan() and create_merge_append_plan() you have the\n> same code to set the part_prune_index. Why not just move all that code\n> into make_partition_pruneinfo() and have make_partition_pruneinfo()\n> return the index and append to the PlannerInfo.partPruneInfos List?\n\nThat sounds better, so done.\n\n> * Why not forboth() here?\n>\n> i = 0;\n> foreach(stmtlist_item, portal->stmts)\n> {\n> PlannedStmt *pstmt = lfirst_node(PlannedStmt, stmtlist_item);\n> PartitionPruneResult *part_prune_result = part_prune_results ?\n> list_nth(part_prune_results, i) :\n> NULL;\n>\n> i++;\n\nBecause the PartitionPruneResult list may not always be available. To\nwit, it's only available when it is GetCachedPlan() that gave the\nportal its plan. I know this is a bit ugly, but it seems better than\nfixing all users of Portal to build a dummy list, not that it is\ntotally avoidable even in the current implementation.\n\n> * It would be good if ReleaseExecutorLocks() already knew the RTIs\n> that were locked. 
Maybe the executor prep info struct I mentioned\n> above could also store the RTIs that have been locked already and\n> allow ReleaseExecutorLocks() to just iterate over those to release the\n> locks.\n\nRewrote this such that ReleaseExecutorLocks() just receives a list of\nper-PlannedStmt bitmapsets containing the RT indexes of only the\nlocked entries in that plan.\n\nAttached updated patch with these changes.\n\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 8 Apr 2022 14:49:39 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, 8 Apr 2022 at 17:49, Amit Langote <amitlangote09@gmail.com> wrote:\n> Attached updated patch with these changes.\n\nThanks for making the changes. I started looking over this patch but\nreally feel like it needs quite a few more iterations of what we've\njust been doing to get it into proper committable shape. There seems\nto be only about 40 mins to go before the freeze, so it seems very\nunrealistic that it could be made to work.\n\nI started trying to take a serious look at it this evening, but I feel\nlike I just failed to get into it deep enough to make any meaningful\nimprovements. I'd need more time to study the problem before I could\nbuild up a proper opinion on how exactly I think it should work.\n\nAnyway. I've attached a small patch that's just a few things I\nadjusted or questions while reading over your v13 patch. Some of\nthese are just me questioning your code (See XXX comments) and some I\nthink are improvements. 
Feel free to take the hunks that you see fit\nand drop anything you don't.\n\nDavid", "msg_date": "Fri, 8 Apr 2022 23:15:53 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Hi David,\n\nOn Fri, Apr 8, 2022 at 8:16 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Fri, 8 Apr 2022 at 17:49, Amit Langote <amitlangote09@gmail.com> wrote:\n> > Attached updated patch with these changes.\n> Thanks for making the changes. I started looking over this patch but\n> really feel like it needs quite a few more iterations of what we've\n> just been doing to get it into proper committable shape. There seems\n> to be only about 40 mins to go before the freeze, so it seems very\n> unrealistic that it could be made to work.\n\nYeah, totally understandable.\n\n> I started trying to take a serious look at it this evening, but I feel\n> like I just failed to get into it deep enough to make any meaningful\n> improvements. I'd need more time to study the problem before I could\n> build up a proper opinion on how exactly I think it should work.\n>\n> Anyway. I've attached a small patch that's just a few things I\n> adjusted or questions while reading over your v13 patch. Some of\n> these are just me questioning your code (See XXX comments) and some I\n> think are improvements. Feel free to take the hunks that you see fit\n> and drop anything you don't.\n\nThanks a lot for compiling those.\n\nMost of the changes looked fine to me except a couple of typos, so I've\nadopted those into the attached new version, even though I know it's\ntoo late to try to apply it. Re the XXX comments:\n\n+ /* XXX why would pprune->rti_map[i] ever be zero here??? */\n\nYeah, no there can't be, was perhaps being overly paranoid.\n\n+ * XXX is it worth doing a bms_copy() on glob->minLockRelids if\n+ * glob->containsInitialPruning is true? I'm slightly worried that the\n+ * Bitmapset could have a very long empty tail resulting in excessive\n+ * looping during AcquireExecutorLocks().\n+ */\n\nI guess I trust your instincts about bitmapset operation efficiency\nand what you've written here makes sense. It's typical for leaf\npartitions to have been appended toward the tail end of rtable and I'd\nimagine their indexes would be in the tail words of minLockRelids. 
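To model the concern with a toy word-array (only a sketch -- the real Bitmapset code is in src/backend/nodes/bitmapset.c, and whether a copy is actually sized down to the last nonzero word is the assumption under discussion here): if it is, whole-set loops stop paying for the empty tail left behind by deleted high-numbered members.

```c
#include <stdint.h>

/* Toy bitmapset: an array of words; whole-set loops cost O(nwords). */
#define TOY_NWORDS 8

/* Number of words a copy sized to the last nonzero word would need. */
static int
toy_trimmed_nwords(const uint64_t *words, int nwords)
{
    while (nwords > 0 && words[nwords - 1] == 0)
        nwords--;
    return nwords;
}

/* sample set: members only in words 0 and 2, then a long zero tail */
static const uint64_t toy_words[TOY_NWORDS] = {0x1u, 0u, 0x4u, 0u, 0u, 0u, 0u, 0u};
```

With 8 words reduced to 3 in this toy case, any per-call scan over the whole set has that much less to walk.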
I'm slighly worried that the\n+ * Bitmapset could have a very long empty tail resulting in excessive\n+ * looping during AcquireExecutorLocks().\n+ */\n\nI guess I trust your instincts about bitmapset operation efficiency\nand what you've written here makes sense. It's typical for leaf\npartitions to have been appended toward the tail end of rtable and I'd\nimagine their indexes would be in the tail words of minLockRelids. If\ncopying the bitmapset removes those useless words, I don't see why we\nshouldn't do that. So added:\n\n+ /*\n+ * It seems worth doing a bms_copy() on glob->minLockRelids if we deleted\n+ * bit from it just above to prevent empty tail bits resulting in\n+ * inefficient looping during AcquireExecutorLocks().\n+ */\n+ if (glob->containsInitialPruning)\n+ glob->minLockRelids = bms_copy(glob->minLockRelids)\n\nNot 100% about the comment I wrote.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 8 Apr 2022 20:45:37 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Apr 8, 2022 at 8:45 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Most looked fine changes to me except a couple of typos, so I've\n> adopted those into the attached new version, even though I know it's\n> too late to try to apply it.\n>\n> + * XXX is it worth doing a bms_copy() on glob->minLockRelids if\n> + * glob->containsInitialPruning is true?. I'm slighly worried that the\n> + * Bitmapset could have a very long empty tail resulting in excessive\n> + * looping during AcquireExecutorLocks().\n> + */\n>\n> I guess I trust your instincts about bitmapset operation efficiency\n> and what you've written here makes sense. It's typical for leaf\n> partitions to have been appended toward the tail end of rtable and I'd\n> imagine their indexes would be in the tail words of minLockRelids. 
If\n> copying the bitmapset removes those useless words, I don't see why we\n> shouldn't do that. So added:\n>\n> + /*\n> + * It seems worth doing a bms_copy() on glob->minLockRelids if we deleted\n> + * bit from it just above to prevent empty tail bits resulting in\n> + * inefficient looping during AcquireExecutorLocks().\n> + */\n> + if (glob->containsInitialPruning)\n> + glob->minLockRelids = bms_copy(glob->minLockRelids)\n>\n> Not 100% about the comment I wrote.\n\nAnd the quoted code change missed a semicolon in the v14 that I\nhurriedly sent on Friday. (Had apparently forgotten to `git add` the\nhunk to fix that).\n\nSending v15 that fixes that to keep the cfbot green for now.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 11 Apr 2022 12:05:19 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Sun, Apr 10, 2022 at 8:05 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> On Fri, Apr 8, 2022 at 8:45 PM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> > Most looked fine changes to me except a couple of typos, so I've\n> > adopted those into the attached new version, even though I know it's\n> > too late to try to apply it.\n> >\n> > + * XXX is it worth doing a bms_copy() on glob->minLockRelids if\n> > + * glob->containsInitialPruning is true?. I'm slighly worried that the\n> > + * Bitmapset could have a very long empty tail resulting in excessive\n> > + * looping during AcquireExecutorLocks().\n> > + */\n> >\n> > I guess I trust your instincts about bitmapset operation efficiency\n> > and what you've written here makes sense. It's typical for leaf\n> > partitions to have been appended toward the tail end of rtable and I'd\n> > imagine their indexes would be in the tail words of minLockRelids. If\n> > copying the bitmapset removes those useless words, I don't see why we\n> > shouldn't do that. 
So added:\n> >\n> > + /*\n> > + * It seems worth doing a bms_copy() on glob->minLockRelids if we\n> deleted\n> > + * bit from it just above to prevent empty tail bits resulting in\n> > + * inefficient looping during AcquireExecutorLocks().\n> > + */\n> > + if (glob->containsInitialPruning)\n> > + glob->minLockRelids = bms_copy(glob->minLockRelids)\n> >\n> > Not 100% about the comment I wrote.\n>\n> And the quoted code change missed a semicolon in the v14 that I\n> hurriedly sent on Friday.   (Had apparently forgotten to `git add` the\n> hunk to fix that).\n>\n> Sending v15 that fixes that to keep the cfbot green for now.\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n\nHi,\n\n+               /* RT index of the partitione table. */\n\npartitione -> partitioned\n\nCheers
", "msg_date": "Sun, 10 Apr 2022 20:58:23 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Mon, Apr 11, 2022 at 12:53 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> On Sun, Apr 10, 2022 at 8:05 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> Sending v15 that fixes that to keep the cfbot green for now.\n>\n> Hi,\n>\n> +               /* RT index of the partitione table. 
*/\n>\n> partitione -> partitioned\n\nThanks, fixed.\n\nAlso, I broke this into patches:\n\n0001 contains the mechanical changes of moving PartitionPruneInfo out\nof Append/MergeAppend into a list in PlannedStmt.\n\n0002 is the main patch to \"Optimize AcquireExecutorLocks() by locking\nonly unpruned partitions\".\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 27 May 2022 17:09:46 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, May 27, 2022 at 1:10 AM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> On Mon, Apr 11, 2022 at 12:53 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > On Sun, Apr 10, 2022 at 8:05 PM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> >> Sending v15 that fixes that to keep the cfbot green for now.\n> >\n> > Hi,\n> >\n> > + /* RT index of the partitione table. */\n> >\n> > partitione -> partitioned\n>\n> Thanks, fixed.\n>\n> Also, I broke this into patches:\n>\n> 0001 contains the mechanical changes of moving PartitionPruneInfo out\n> of Append/MergeAppend into a list in PlannedStmt.\n>\n> 0002 is the main patch to \"Optimize AcquireExecutorLocks() by locking\n> only unpruned partitions\".\n>\n> --\n> Thanks, Amit Langote\n> EDB: http://www.enterprisedb.com\n\nHi,\nIn the description:\n\nis made available to the actual execution via\nPartitionPruneResult, made available along with the PlannedStmt by the\n\nI think the second `made available` is redundant (can be omitted).\n\n+ * Initial pruning is performed here if needed (unless it has already been\ndone\n+ * by ExecDoInitialPruning()), and in that case only the surviving\nsubplans'\n\nI wonder if there is a typo above - I don't find ExecDoInitialPruning\neither in PG codebase or in the patches (except for this in the comment).\nI think it should be ExecutorDoInitialPruning.\n\n+ * bit from it just above to prevent empty tail bits 
resulting in\n\nI searched in the code base but didn't find mentioning of `empty tail bit`.\nDo you mind explaining a bit about it ?\n\nCheers
", "msg_date": "Fri, 27 May 2022 13:08:39 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, May 27, 2022 at 1:09 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> 0001 contains the mechanical changes of moving PartitionPruneInfo out\n> of Append/MergeAppend into a list in PlannedStmt.\n>\n> 0002 is the main patch to \"Optimize AcquireExecutorLocks() by locking\n> only unpruned partitions\".\n\nThis patchset will need to be rebased over 835d476fd21; looks like\njust a cosmetic change.\n\n--Jacob\n", "msg_date": "Tue, 5 Jul 2022 10:43:21 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Wed, Jul 6, 2022 at 2:43 AM Jacob Champion <jchampion@timescale.com> wrote:\n> On Fri, May 27, 2022 at 1:09 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > 0001 contains the mechanical changes of moving PartitionPruneInfo out\n> > of Append/MergeAppend into a list in PlannedStmt.\n> >\n> > 0002 is the main patch to \"Optimize AcquireExecutorLocks() by locking\n> > only unpruned partitions\".\n>\n> This patchset will need to be rebased over 835d476fd21; looks like\n> just a cosmetic change.\n\nThanks for the heads up.\n\nRebased and also fixed per comments given by Zhihong Yu on May 28.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 6 Jul 2022 11:37:57 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Rebased over 964d01ae90c.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 13 Jul 2022 15:40:10 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and 
\"initial\" pruning" }, { "msg_contents": "On Wed, Jul 13, 2022 at 3:40 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Rebased over 964d01ae90c.\n\nSorry, left some pointless hunks in there while rebasing. Fixed in\nthe attached.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 13 Jul 2022 16:03:51 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Wed, Jul 13, 2022 at 4:03 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Jul 13, 2022 at 3:40 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Rebased over 964d01ae90c.\n>\n> Sorry, left some pointless hunks in there while rebasing. Fixed in\n> the attached.\n\nNeeded to be rebased again, over 2d04277121f this time.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 27 Jul 2022 12:00:57 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Jul 26, 2022 at 11:01 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Needed to be rebased again, over 2d04277121f this time.\n\n0001 adds es_part_prune_result but does not use it, so maybe the\nintroduction of that field should be deferred until it's needed for\nsomething.\n\nI wonder whether it's really necessary to added the PartitionPruneInfo\nobjects to a list in PlannerInfo first and then roll them up into\nPlannerGlobal later. I know we do that for range table entries, but\nI've never quite understood why we do it that way instead of creating\na flat range table in PlannerGlobal from the start. 
And so by\nextension I wonder whether this table couldn't be flat from the start\nalso.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 27 Jul 2022 12:27:06 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Thu, Jul 28, 2022 at 1:27 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Jul 26, 2022 at 11:01 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Needed to be rebased again, over 2d04277121f this time.\n\nThanks for looking.\n\n> 0001 adds es_part_prune_result but does not use it, so maybe the\n> introduction of that field should be deferred until it's needed for\n> something.\n\nOops, looks like a mistake when breaking the patch. Will move that bit to 0002.\n\n> I wonder whether it's really necessary to added the PartitionPruneInfo\n> objects to a list in PlannerInfo first and then roll them up into\n> PlannerGlobal later. I know we do that for range table entries, but\n> I've never quite understood why we do it that way instead of creating\n> a flat range table in PlannerGlobal from the start. And so by\n> extension I wonder whether this table couldn't be flat from the start\n> also.\n\nTom may want to correct me but my understanding of why the planner\nwaits till the end of planning to start populating the PlannerGlobal\nrange table is that it is not until then that we know which subqueries\nwill be scanned by the final plan tree, so also whose range table\nentries will be included in the range table passed to the executor. I\ncan see that subquery pull-up causes a pulled-up subquery's range\ntable entries to be added into the parent's query's and all its nodes\nchanged using OffsetVarNodes() to refer to the new RT indexes. 
But\nfor subqueries that are not pulled up, their subplans' nodes (present\nin PlannerGlobal.subplans) would still refer to the original RT\nindexes (per range table in the corresponding PlannerGlobal.subroot),\nwhich must be fixed and the end of planning is the time to do so. Or\nmaybe that could be done when build_subplan() creates a subplan and\nadds it to PlannerGlobal.subplans, but for some reason it's not?\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Jul 2022 13:20:37 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Thu, Jul 28, 2022 at 1:27 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> I wonder whether it's really necessary to added the PartitionPruneInfo\n>> objects to a list in PlannerInfo first and then roll them up into\n>> PlannerGlobal later. I know we do that for range table entries, but\n>> I've never quite understood why we do it that way instead of creating\n>> a flat range table in PlannerGlobal from the start. And so by\n>> extension I wonder whether this table couldn't be flat from the start\n>> also.\n\n> Tom may want to correct me but my understanding of why the planner\n> waits till the end of planning to start populating the PlannerGlobal\n> range table is that it is not until then that we know which subqueries\n> will be scanned by the final plan tree, so also whose range table\n> entries will be included in the range table passed to the executor.\n\nIt would not be profitable to flatten the range table before we've\ndone remove_useless_joins. We'd end up with useless entries from\nsubqueries that ultimately aren't there. 
We could perhaps do it\nafter we finish that phase, but I don't really see the point: it\nwouldn't be better than what we do now, just the same work at a\ndifferent time.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Jul 2022 00:55:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Jul 29, 2022 at 12:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> It would not be profitable to flatten the range table before we've\n> done remove_useless_joins. We'd end up with useless entries from\n> subqueries that ultimately aren't there. We could perhaps do it\n> after we finish that phase, but I don't really see the point: it\n> wouldn't be better than what we do now, just the same work at a\n> different time.\n\nThat's not quite my question, though. Why do we ever build a non-flat\nrange table in the first place? Like, instead of assigning indexes\nrelative to the current subquery level, why not just assign them\nrelative to the whole query from the start? It can't really be that\nwe've done it this way because of remove_useless_joins(), because\nwe've been building separate range tables and later flattening them\nfor longer than join removal has existed as a feature.\n\nWhat bugs me is that it's very much not free. By building a bunch of\nseparate range tables and combining them later, we generate extra\nwork: we have to go back and adjust RT indexes after-the-fact. We pay\nthat overhead for every query, not just the ones that end up with some\nunused entries in the range table. And why would it matter if we did\nend up with some useless entries in the range table, anyway? 
If\nthere's some semantic difference, we could add a flag to mark those\nentries as needing to be ignored, which seems way better than crawling\nall over the whole tree adjusting RTIs everywhere.\n\nI don't really expect that we're ever going to change this -- and\ncertainly not on this thread. The idea of running around and replacing\nRT indexes all over the tree is deeply embedded in the system. But are\nwe really sure we want to add a second kind of index that we have to\nrun around and adjust at the same time?\n\nIf we are, so be it, I guess. It just looks really ugly and unnecessary to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Jul 2022 08:22:26 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> That's not quite my question, though. Why do we ever build a non-flat\n> range table in the first place? Like, instead of assigning indexes\n> relative to the current subquery level, why not just assign them\n> relative to the whole query from the start?\n\nWe could probably make that work, but I'm skeptical that it would\nreally be an improvement overall, for a couple of reasons.\n\n(1) The need for merge-rangetables-and-renumber-Vars logic doesn't\ngo away. It just moves from setrefs.c to the rewriter, which would\nhave to do it when expanding views. 
This would be a net loss\nperformance-wise, I think, because setrefs.c can do it as part of a\nparsetree scan that it has to perform anyway for other housekeeping\nreasons; but the rewriter would need a brand new pass over the tree.\nAdmittedly that pass would only happen for view replacement, but\nit's still not open-and-shut that there'd be a performance win.\n\n(2) The need for varlevelsup and similar fields doesn't go away,\nI think, because we need those for semantic purposes such as\ndiscovering the query level that aggregates are associated with.\nThat means that subquery flattening still has to make a pass over\nthe tree to touch every Var's varlevelsup; so not having to adjust\nvarno at the same time would save little.\n\nI'm not sure whether I think it's a net plus or net minus that\nvarno would become effectively independent of varlevelsup.\nIt'd be different from the way we think of them now, for sure,\nand I think it'd take awhile to flush out bugs arising from such\na redefinition.\n\n> I don't really expect that we're ever going to change this -- and\n> certainly not on this thread. The idea of running around and replacing\n> RT indexes all over the tree is deeply embedded in the system. But are\n> we really sure we want to add a second kind of index that we have to\n> run around and adjust at the same time?\n\nYou probably want to avert your eyes from [1], then ;-). 
Although\nI'm far from convinced that the cross-list index fields currently\nproposed there are actually necessary; the cost to adjust them\nduring rangetable merging could outweigh any benefit.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CA+HiwqGjJDmUhDSfv-U2qhKJjt9ST7Xh9JXC_irsAQ1TAUsJYg@mail.gmail.com\n\n\n", "msg_date": "Fri, 29 Jul 2022 11:04:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Jul 29, 2022 at 11:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> We could probably make that work, but I'm skeptical that it would\n> really be an improvement overall, for a couple of reasons.\n>\n> (1) The need for merge-rangetables-and-renumber-Vars logic doesn't\n> go away. It just moves from setrefs.c to the rewriter, which would\n> have to do it when expanding views. This would be a net loss\n> performance-wise, I think, because setrefs.c can do it as part of a\n> parsetree scan that it has to perform anyway for other housekeeping\n> reasons; but the rewriter would need a brand new pass over the tree.\n> Admittedly that pass would only happen for view replacement, but\n> it's still not open-and-shut that there'd be a performance win.\n>\n> (2) The need for varlevelsup and similar fields doesn't go away,\n> I think, because we need those for semantic purposes such as\n> discovering the query level that aggregates are associated with.\n> That means that subquery flattening still has to make a pass over\n> the tree to touch every Var's varlevelsup; so not having to adjust\n> varno at the same time would save little.\n>\n> I'm not sure whether I think it's a net plus or net minus that\n> varno would become effectively independent of varlevelsup.\n> It'd be different from the way we think of them now, for sure,\n> and I think it'd take awhile to flush out bugs arising from such\n> a redefinition.\n\nInteresting. 
Thanks for your thoughts. I guess it's not as clear-cut\nas I thought, but I still can't help feeling like we're doing an awful\nlot of expensive rearrangement at the end of query planning.\n\nI kind of wonder whether varlevelsup is the wrong idea. Like, suppose\nwe instead handed out subquery identifiers serially, sort of like what\nwe do with SubTransactionId values. Then instead of testing whether\nvarlevelsup>0 you test whether varsubqueryid==mysubqueryid. If you\nflatten a query into its parent, you still need to adjust every var\nthat refers to the dead subquery, but you don't need to adjust vars\nthat refer to subqueries underneath it. Their level changes, but their\nidentity doesn't. Maybe that doesn't really help that much, but it's\nalways struck me as a little unfortunate that we basically test\nwhether a var is equal by testing whether the varno and varlevelsup\nare equal. That only works if you assume that you can never end up\ncomparing two vars from thoroughly unrelated parts of the tree, such\nthat the subquery one level up from one might be different from the\nsubquery one level up from the other.\n\n> > I don't really expect that we're ever going to change this -- and\n> > certainly not on this thread. The idea of running around and replacing\n> > RT indexes all over the tree is deeply embedded in the system. But are\n> > we really sure we want to add a second kind of index that we have to\n> > run around and adjust at the same time?\n>\n> You probably want to avert your eyes from [1], then ;-). Although\n> I'm far from convinced that the cross-list index fields currently\n> proposed there are actually necessary; the cost to adjust them\n> during rangetable merging could outweigh any benefit.\n\nI really like the idea of that patch overall, actually; I think\npermissions checking is a good example of something that shouldn't\nrequire walking the whole query tree but currently does. 
And actually,\nI think the same thing is true here: we shouldn't need to walk the\nwhole query tree to find the pruning information, but right now we do.\nI'm just uncertain whether what Amit has implemented is the\nleast-annoying way to go about it... any thoughts on that,\nspecifically as it pertains to this patch?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Jul 2022 11:56:02 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> ... it's\n> always struck me as a little unfortunate that we basically test\n> whether a var is equal by testing whether the varno and varlevelsup\n> are equal. That only works if you assume that you can never end up\n> comparing two vars from thoroughly unrelated parts of the tree, such\n> that the subquery one level up from one might be different from the\n> subquery one level up from the other.\n\nYeah, that's always bothered me a little as well. I've yet to see a\ncase where it causes a problem in practice. But I think that if, say,\nwe were to try to do any sort of cross-query-level optimization, then\nthe ambiguity could rise up to bite us. That might be a situation\nwhere a flat rangetable would be worth the trouble.\n\n> I'm just uncertain whether what Amit has implemented is the\n> least-annoying way to go about it... any thoughts on that,\n> specifically as it pertains to this patch?\n\nI haven't looked at this patch at all. 
I'll try to make some\ntime for it, but probably not today.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Jul 2022 12:47:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Jul 29, 2022 at 12:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I'm just uncertain whether what Amit has implemented is the\n> > least-annoying way to go about it... any thoughts on that,\n> > specifically as it pertains to this patch?\n>\n> I haven't looked at this patch at all. I'll try to make some\n> time for it, but probably not today.\n\nOK, thanks. The preliminary patch I'm talking about here is pretty\nshort, so you could probably look at that part of it, at least, in\nsome relatively small amount of time. And I think it's also in pretty\nreasonable shape apart from this issue. But, as usual, there's the\nquestion of how well one can evaluate a preliminary patch without\nreviewing the full patch in detail.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Jul 2022 12:55:19 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Jul 29, 2022 at 1:20 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Jul 28, 2022 at 1:27 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > 0001 adds es_part_prune_result but does not use it, so maybe the\n> > introduction of that field should be deferred until it's needed for\n> > something.\n>\n> Oops, looks like a mistake when breaking the patch. Will move that bit to 0002.\n\nFixed that and also noticed that I had defined PartitionPruneResult in\nthe wrong header (execnodes.h). That led to PartitionPruneResult\nnodes not being able to be written and read, because\nsrc/backend/nodes/gen_node_support.pl doesn't create _out* and _read*\nroutines for the nodes defined in execnodes.h. 
I moved its definition\nto plannodes.h, even though it is not actually the planner that\ninstantiates those; no other include/nodes header sounds better.\n\nOne more thing I realized is that Bitmapsets added to the List\nPartitionPruneResult.valid_subplan_offs_list are not actually\nread/write-able. That's a problem that I also faced in [1], so I\nproposed a patch there to make Bitmapset a read/write-able Node and\nmark (only) the Bitmapsets that are added into read/write-able node\ntrees with the corresponding NodeTag. I'm including that patch here\nas well (0002) for the main patch to work (pass\n-DWRITE_READ_PARSE_PLAN_TREES build tests), though it might make sense\nto discuss it in its own thread?\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/CA%2BHiwqH80qX1ZLx3HyHmBrOzLQeuKuGx6FzGep0F_9zw9L4PAA%40mail.gmail.com", "msg_date": "Wed, 12 Oct 2022 16:36:15 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Wed, Oct 12, 2022 at 4:36 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Fri, Jul 29, 2022 at 1:20 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Thu, Jul 28, 2022 at 1:27 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > 0001 adds es_part_prune_result but does not use it, so maybe the\n> > > introduction of that field should be deferred until it's needed for\n> > > something.\n> >\n> > Oops, looks like a mistake when breaking the patch. Will move that bit to 0002.\n>\n> Fixed that and also noticed that I had defined PartitionPruneResult in\n> the wrong header (execnodes.h). That led to PartitionPruneResult\n> nodes not being able to be written and read, because\n> src/backend/nodes/gen_node_support.pl doesn't create _out* and _read*\n> routines for the nodes defined in execnodes.h. 
I moved its definition\n> to plannodes.h, even though it is not actually the planner that\n> instantiates those; no other include/nodes header sounds better.\n>\n> One more thing I realized is that Bitmapsets added to the List\n> PartitionPruneResult.valid_subplan_offs_list are not actually\n> read/write-able. That's a problem that I also faced in [1], so I\n> proposed a patch there to make Bitmapset a read/write-able Node and\n> mark (only) the Bitmapsets that are added into read/write-able node\n> trees with the corresponding NodeTag. I'm including that patch here\n> as well (0002) for the main patch to work (pass\n> -DWRITE_READ_PARSE_PLAN_TREES build tests), though it might make sense\n> to discuss it in its own thread?\n\nHad second thoughts on the use of List of Bitmapsets for this, such\nthat the make-Bitmapset-Nodes patch is no longer needed.\n\nI had defined PartitionPruneResult such that it stood for the results\nof pruning for all PartitionPruneInfos contained in\nPlannedStmt.partPruneInfos (covering all Append/MergeAppend nodes that\ncan use partition pruning in a given plan). So, it had a List of\nBitmapset. I think it's perhaps better for PartitionPruneResult to\ncover only one PartitionPruneInfo and thus need only a Bitmapset and\nnot a List thereof, which I have implemented in the attached updated\npatch 0002. 
So, instead of needing to pass around a\nPartitionPruneResult with each PlannedStmt, this now passes a List of\nPartitionPruneResult with an entry for each in\nPlannedStmt.partPruneInfos.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 17 Oct 2022 18:29:48 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Mon, Oct 17, 2022 at 6:29 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Oct 12, 2022 at 4:36 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Fri, Jul 29, 2022 at 1:20 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > On Thu, Jul 28, 2022 at 1:27 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > 0001 adds es_part_prune_result but does not use it, so maybe the\n> > > > introduction of that field should be deferred until it's needed for\n> > > > something.\n> > >\n> > > Oops, looks like a mistake when breaking the patch. Will move that bit to 0002.\n> >\n> > Fixed that and also noticed that I had defined PartitionPruneResult in\n> > the wrong header (execnodes.h). That led to PartitionPruneResult\n> > nodes not being able to be written and read, because\n> > src/backend/nodes/gen_node_support.pl doesn't create _out* and _read*\n> > routines for the nodes defined in execnodes.h. I moved its definition\n> > to plannodes.h, even though it is not actually the planner that\n> > instantiates those; no other include/nodes header sounds better.\n> >\n> > One more thing I realized is that Bitmapsets added to the List\n> > PartitionPruneResult.valid_subplan_offs_list are not actually\n> > read/write-able. That's a problem that I also faced in [1], so I\n> > proposed a patch there to make Bitmapset a read/write-able Node and\n> > mark (only) the Bitmapsets that are added into read/write-able node\n> > trees with the corresponding NodeTag. 
I'm including that patch here\n> > as well (0002) for the main patch to work (pass\n> > -DWRITE_READ_PARSE_PLAN_TREES build tests), though it might make sense\n> > to discuss it in its own thread?\n>\n> Had second thoughts on the use of List of Bitmapsets for this, such\n> that the make-Bitmapset-Nodes patch is no longer needed.\n>\n> I had defined PartitionPruneResult such that it stood for the results\n> of pruning for all PartitionPruneInfos contained in\n> PlannedStmt.partPruneInfos (covering all Append/MergeAppend nodes that\n> can use partition pruning in a given plan). So, it had a List of\n> Bitmapset. I think it's perhaps better for PartitionPruneResult to\n> cover only one PartitionPruneInfo and thus need only a Bitmapset and\n> not a List thereof, which I have implemented in the attached updated\n> patch 0002. So, instead of needing to pass around a\n> PartitionPruneResult with each PlannedStmt, this now passes a List of\n> PartitionPruneResult with an entry for each in\n> PlannedStmt.partPruneInfos.\n\nRebased over 3b2db22fe.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 27 Oct 2022 11:41:55 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Thu, Oct 27, 2022 at 11:41 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Mon, Oct 17, 2022 at 6:29 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Wed, Oct 12, 2022 at 4:36 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > On Fri, Jul 29, 2022 at 1:20 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > > On Thu, Jul 28, 2022 at 1:27 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > > 0001 adds es_part_prune_result but does not use it, so maybe the\n> > > > > introduction of that field should be deferred until it's needed for\n> > > > > something.\n> > > >\n> > > > Oops, looks like a mistake when breaking the patch. 
Will move that bit to 0002.\n> > >\n> > > Fixed that and also noticed that I had defined PartitionPruneResult in\n> > > the wrong header (execnodes.h). That led to PartitionPruneResult\n> > > nodes not being able to be written and read, because\n> > > src/backend/nodes/gen_node_support.pl doesn't create _out* and _read*\n> > > routines for the nodes defined in execnodes.h. I moved its definition\n> > > to plannodes.h, even though it is not actually the planner that\n> > > instantiates those; no other include/nodes header sounds better.\n> > >\n> > > One more thing I realized is that Bitmapsets added to the List\n> > > PartitionPruneResult.valid_subplan_offs_list are not actually\n> > > read/write-able. That's a problem that I also faced in [1], so I\n> > > proposed a patch there to make Bitmapset a read/write-able Node and\n> > > mark (only) the Bitmapsets that are added into read/write-able node\n> > > trees with the corresponding NodeTag. I'm including that patch here\n> > > as well (0002) for the main patch to work (pass\n> > > -DWRITE_READ_PARSE_PLAN_TREES build tests), though it might make sense\n> > > to discuss it in its own thread?\n> >\n> > Had second thoughts on the use of List of Bitmapsets for this, such\n> > that the make-Bitmapset-Nodes patch is no longer needed.\n> >\n> > I had defined PartitionPruneResult such that it stood for the results\n> > of pruning for all PartitionPruneInfos contained in\n> > PlannedStmt.partPruneInfos (covering all Append/MergeAppend nodes that\n> > can use partition pruning in a given plan). So, it had a List of\n> > Bitmapset. I think it's perhaps better for PartitionPruneResult to\n> > cover only one PartitionPruneInfo and thus need only a Bitmapset and\n> > not a List thereof, which I have implemented in the attached updated\n> > patch 0002. 
So, instead of needing to pass around a\n> > PartitionPruneResult with each PlannedStmt, this now passes a List of\n> > PartitionPruneResult with an entry for each in\n> > PlannedStmt.partPruneInfos.\n>\n> Rebased over 3b2db22fe.\n\nUpdated 0002 to cope with AssertArg() being removed from the tree.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 8 Nov 2022 15:22:32 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Looking at 0001, I wonder if we should have a crosscheck that a\nPartitionPruneInfo you got from following an index is indeed constructed\nfor the relation that you think it is: previously, you were always sure\nthat the prune struct is for this node, because you followed a pointer\nthat was set up in the node itself. Now you only have an index, and you\nhave to trust that the index is correct.\n\nI'm not sure how to implement this, or even if it's doable at all.\nKeeping the OID of the partitioned table in the PartitionPruneInfo\nstruct is easy, but I don't know how to check it in ExecInitMergeAppend\nand ExecInitAppend.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Find a bug in a program, and fix it, and the program will work today.\nShow the program how to find and fix a bug, and the program\nwill work forever\" (Oliver Silfridge)\n\n\n", "msg_date": "Wed, 30 Nov 2022 19:12:01 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Hi Alvaro,\n\nThanks for looking at this one.\n\nOn Thu, Dec 1, 2022 at 3:12 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Looking at 0001, I wonder if we should have a crosscheck that a\n> PartitionPruneInfo you got from following an index is indeed constructed\n> for the relation that you think it is: previously, you were 
always sure\n> that the prune struct is for this node, because you followed a pointer\n> that was set up in the node itself. Now you only have an index, and you\n> have to trust that the index is correct.\n\nYeah, a crosscheck sounds like a good idea.\n\n> I'm not sure how to implement this, or even if it's doable at all.\n> Keeping the OID of the partitioned table in the PartitionPruneInfo\n> struct is easy, but I don't know how to check it in ExecInitMergeAppend\n> and ExecInitAppend.\n\nHmm, how about keeping the [Merge]Append's parent relation's RT index\nin the PartitionPruneInfo and passing it down to\nExecInitPartitionPruning() from ExecInit[Merge]Append() for\ncross-checking? Both Append and MergeAppend already have a\n'apprelids' field that we can save a copy of in the\nPartitionPruneInfo. Tried that in the attached delta patch.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 1 Dec 2022 16:59:25 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On 2022-Dec-01, Amit Langote wrote:\n\n> Hmm, how about keeping the [Merge]Append's parent relation's RT index\n> in the PartitionPruneInfo and passing it down to\n> ExecInitPartitionPruning() from ExecInit[Merge]Append() for\n> cross-checking? Both Append and MergeAppend already have a\n> 'apprelids' field that we can save a copy of in the\n> PartitionPruneInfo. Tried that in the attached delta patch.\n\nAh yeah, that sounds about what I was thinking. I've merged that in and\npushed to github, which had a strange pg_upgrade failure on Windows\nmentioning log files that were not captured by the CI tooling. So I\npushed another one trying to grab those files, in case it wasn't a\none-off failure. 
It's running now:\n https://cirrus-ci.com/task/5857239638999040\n\nIf all goes well with this run, I'll get this 0001 pushed.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n"Investigación es lo que hago cuando no sé lo que estoy haciendo"\n(Wernher von Braun)\n\n\n", "msg_date": "Thu, 1 Dec 2022 12:21:06 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Thu, Dec 1, 2022 at 8:21 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Dec-01, Amit Langote wrote:\n> > Hmm, how about keeping the [Merge]Append's parent relation's RT index\n> > in the PartitionPruneInfo and passing it down to\n> > ExecInitPartitionPruning() from ExecInit[Merge]Append() for\n> > cross-checking? Both Append and MergeAppend already have a\n> > 'apprelids' field that we can save a copy of in the\n> > PartitionPruneInfo. Tried that in the attached delta patch.\n>\n> Ah yeah, that sounds about what I was thinking. I've merged that in and\n> pushed to github, which had a strange pg_upgrade failure on Windows\n> mentioning log files that were not captured by the CI tooling. So I\n> pushed another one trying to grab those files, in case it wasn't a\n> one-off failure. 
It's running now:\n> https://cirrus-ci.com/task/5857239638999040\n>\n> If all goes well with this run, I'll get this 0001 pushed.\n\nThanks for pushing 0001.\n\nRebased 0002 attached.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 1 Dec 2022 21:43:28 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Thu, Dec 1, 2022 at 9:43 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Dec 1, 2022 at 8:21 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > On 2022-Dec-01, Amit Langote wrote:\n> > > Hmm, how about keeping the [Merge]Append's parent relation's RT index\n> > > in the PartitionPruneInfo and passing it down to\n> > > ExecInitPartitionPruning() from ExecInit[Merge]Append() for\n> > > cross-checking? Both Append and MergeAppend already have a\n> > > 'apprelids' field that we can save a copy of in the\n> > > PartitionPruneInfo. Tried that in the attached delta patch.\n> >\n> > Ah yeah, that sounds about what I was thinking. I've merged that in and\n> > pushed to github, which had a strange pg_upgrade failure on Windows\n> > mentioning log files that were not captured by the CI tooling. So I\n> > pushed another one trying to grab those files, in case it wasn't an\n> > one-off failure. It's running now:\n> > https://cirrus-ci.com/task/5857239638999040\n> >\n> > If all goes well with this run, I'll get this 0001 pushed.\n>\n> Thanks for pushing 0001.\n>\n> Rebased 0002 attached.\n\nThought it might be good for PartitionPruneResult to also have\nroot_parent_relids that matches with the corresponding\nPartitionPruneInfo. 
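For concreteness, the shape of that cross-check can be modeled with a tiny standalone sketch -- the struct and function names below only mimic the patch, and a plain integer bitmask stands in for the Bitmapset of parent RT indexes, so none of this is actual PostgreSQL code:

```c
#include <stdint.h>

/* Stand-in for PartitionPruneInfo: remembers who it was built for. */
typedef struct PruneInfoSketch
{
    uint64_t    root_parent_relids; /* filled in when the info is built */
} PruneInfoSketch;

/* Stand-in for an Append/MergeAppend plan node. */
typedef struct AppendSketch
{
    uint64_t    apprelids;          /* the node's own parent relid set */
    int         part_prune_index;   /* index into the plan-wide info list */
} AppendSketch;

/*
 * Models the proposed sanity check: the info fetched by index must have
 * been built for this very node (the real code would use bms_equal()
 * on actual Bitmapsets).
 */
static int
prune_info_matches_node(const PruneInfoSketch *pinfo,
                        const AppendSketch *node)
{
    return pinfo->root_parent_relids == node->apprelids;
}
```
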
ExecInitPartitionPruning() does a sanity check\nthat the root_parent_relids of a given pair of PartitionPrune{Info |\nResult} match.\n\nPosting the patch separately as the attached 0002, just in case you\nmight think that the extra cross-checking would be an overkill.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 2 Dec 2022 19:40:42 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Dec 2, 2022 at 7:40 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Dec 1, 2022 at 9:43 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Thu, Dec 1, 2022 at 8:21 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > > On 2022-Dec-01, Amit Langote wrote:\n> > > > Hmm, how about keeping the [Merge]Append's parent relation's RT index\n> > > > in the PartitionPruneInfo and passing it down to\n> > > > ExecInitPartitionPruning() from ExecInit[Merge]Append() for\n> > > > cross-checking? Both Append and MergeAppend already have a\n> > > > 'apprelids' field that we can save a copy of in the\n> > > > PartitionPruneInfo. Tried that in the attached delta patch.\n> > >\n> > > Ah yeah, that sounds about what I was thinking. I've merged that in and\n> > > pushed to github, which had a strange pg_upgrade failure on Windows\n> > > mentioning log files that were not captured by the CI tooling. So I\n> > > pushed another one trying to grab those files, in case it wasn't an\n> > > one-off failure. It's running now:\n> > > https://cirrus-ci.com/task/5857239638999040\n> > >\n> > > If all goes well with this run, I'll get this 0001 pushed.\n> >\n> > Thanks for pushing 0001.\n> >\n> > Rebased 0002 attached.\n>\n> Thought it might be good for PartitionPruneResult to also have\n> root_parent_relids that matches with the corresponding\n> PartitionPruneInfo. 
ExecInitPartitionPruning() does a sanity check\n> that the root_parent_relids of a given pair of PartitionPrune{Info |\n> Result} match.\n>\n> Posting the patch separately as the attached 0002, just in case you\n> might think that the extra cross-checking would be an overkill.\n\nRebased over 92c4dafe1eed and fixed some factual mistakes in the\ncomment above ExecutorDoInitialPruning().\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 5 Dec 2022 12:00:01 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Mon, Dec 5, 2022 at 12:00 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Fri, Dec 2, 2022 at 7:40 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Thought it might be good for PartitionPruneResult to also have\n> > root_parent_relids that matches with the corresponding\n> > PartitionPruneInfo. ExecInitPartitionPruning() does a sanity check\n> > that the root_parent_relids of a given pair of PartitionPrune{Info |\n> > Result} match.\n> >\n> > Posting the patch separately as the attached 0002, just in case you\n> > might think that the extra cross-checking would be an overkill.\n>\n> Rebased over 92c4dafe1eed and fixed some factual mistakes in the\n> comment above ExecutorDoInitialPruning().\n\nSorry, I had forgotten to git-add hunks including some cosmetic\nchanges in that one. Here's another version.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 5 Dec 2022 15:08:09 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "I find the API of GetCachedPlans a little weird after this patch. I\nthink it may be better to have it return a pointer of a new struct --\none that contains both the CachedPlan pointer and the list of pruning\nresults. 
(As I understand, the sole caller that isn't interested in the\npruning results, SPI_plan_get_cached_plan, can be explained by the fact\nthat it knows there won't be any. So I don't think we need to worry\nabout this case?)\n\nAnd I think you should make that struct also be the last argument of\nPortalDefineQuery, so you don't need the separate\nPortalStorePartitionPruneResults function -- because as far as I can\ntell, the callers that pass a non-NULL pointer there are the exactly\nsame that later call PortalStorePartitionPruneResults.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La primera ley de las demostraciones en vivo es: no trate de usar el sistema.\nEscriba un guión que no toque nada para no causar daños.\" (Jakob Nielsen)\n\n\n", "msg_date": "Tue, 6 Dec 2022 20:00:33 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Thanks for the review.\n\nOn Wed, Dec 7, 2022 at 4:00 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> I find the API of GetCachedPlans a little weird after this patch. I\n> think it may be better to have it return a pointer of a new struct --\n> one that contains both the CachedPlan pointer and the list of pruning\n> results. (As I understand, the sole caller that isn't interested in the\n> pruning results, SPI_plan_get_cached_plan, can be explained by the fact\n> that it knows there won't be any. So I don't think we need to worry\n> about this case?)\n\nDavid, in his Apr 7 reply on this thread, also sounded to suggest\nsomething similar.\n\nHmm, I was / am not so sure if GetCachedPlan() should return something\nthat is not CachedPlan. An idea I had today was to replace the\npart_prune_results_list output List parameter with, say,\nQueryInitPruningResult, or something like that and put the current\nlist into that struct. Was looking at QueryEnvironment to come up\nwith *that* name. 
Any thoughts?\n\n> And I think you should make that struct also be the last argument of\n> PortalDefineQuery, so you don't need the separate\n> PortalStorePartitionPruneResults function -- because as far as I can\n> tell, the callers that pass a non-NULL pointer there are the exactly\n> same that later call PortalStorePartitionPruneResults.\n\nYes, it would be better to not need PortalStorePartitionPruneResults.\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 9 Dec 2022 17:26:59 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On 2022-Dec-09, Amit Langote wrote:\n\n> On Wed, Dec 7, 2022 at 4:00 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > I find the API of GetCachedPlans a little weird after this patch.\n\n> David, in his Apr 7 reply on this thread, also sounded to suggest\n> something similar.\n> \n> Hmm, I was / am not so sure if GetCachedPlan() should return something\n> that is not CachedPlan. An idea I had today was to replace the\n> part_prune_results_list output List parameter with, say,\n> QueryInitPruningResult, or something like that and put the current\n> list into that struct. Was looking at QueryEnvironment to come up\n> with *that* name. Any thoughts?\n\nRemind me again why is part_prune_results_list not part of struct\nCachedPlan then? I tried to understand that based on comments upthread,\nbut I was unable to find anything.\n\n(My first reaction to your above comment was \"well, rename GetCachedPlan\nthen, maybe to GetRunnablePlan\", but then I'm wondering if CachedPlan is\nin any way a structure that must be \"immutable\" in the way parser output\nis. 
Looking at the comment at the top of plancache.c it appears to me\nthat it isn't, but maybe I'm missing something.)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"The Postgresql hackers have what I call a \"NASA space shot\" mentality.\n Quite refreshing in a world of \"weekend drag racer\" developers.\"\n(Scott Marlowe)\n\n\n", "msg_date": "Fri, 9 Dec 2022 10:52:17 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Dec 9, 2022 at 6:52 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Dec-09, Amit Langote wrote:\n> > On Wed, Dec 7, 2022 at 4:00 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > > I find the API of GetCachedPlans a little weird after this patch.\n>\n> > David, in his Apr 7 reply on this thread, also sounded to suggest\n> > something similar.\n> >\n> > Hmm, I was / am not so sure if GetCachedPlan() should return something\n> > that is not CachedPlan. An idea I had today was to replace the\n> > part_prune_results_list output List parameter with, say,\n> > QueryInitPruningResult, or something like that and put the current\n> > list into that struct. Was looking at QueryEnvironment to come up\n> > with *that* name. Any thoughts?\n>\n> Remind me again why is part_prune_results_list not part of struct\n> CachedPlan then? I tried to understand that based on comments upthread,\n> but I was unable to find anything.\n\nIt used to be part of CachedPlan for a brief period of time (in patch\nv12 I posted in [1]), but David, in his reply to [1], said he wasn't\nso sure that it belonged there.\n\n> (My first reaction to your above comment was \"well, rename GetCachedPlan\n> then, maybe to GetRunnablePlan\", but then I'm wondering if CachedPlan is\n> in any way a structure that must be \"immutable\" in the way parser output\n> is. 
Looking at the comment at the top of plancache.c it appears to me\n> that it isn't, but maybe I'm missing something.)\n\nCachedPlan *is* supposed to be read-only per the comment above\nCachedPlanSource definition:\n\n * ...If we are using a generic\n * cached plan then it is meant to be re-used across multiple executions, so\n * callers must always treat CachedPlans as read-only.\n\nFYI, there was even an idea of putting a PartitionPruneResults for a\ngiven PlannedStmt into the PlannedStmt itself [2], but PlannedStmt is\nsupposed to be read-only too [3].\n\nMaybe we need some new overarching context when invoking plancache, if\nPortal can't already be it, whose struct can be passed to\nGetCachedPlan() to put the pruning results in? Perhaps,\nGetRunnablePlan() that you floated could be a wrapper for\nGetCachedPlan(), owning that new context.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/CA%2BHiwqH4qQ_YVROr7TY0jSCuGn0oHhH79_DswOdXWN5UnMCBtQ%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAApHDvp_DjVVkgSV24%2BUF7p_yKWeepgoo%2BW2SWLLhNmjwHTVYQ%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/922566.1648784745%40sss.pgh.pa.us\n\n\n", "msg_date": "Fri, 9 Dec 2022 19:34:21 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On 2022-Dec-09, Amit Langote wrote:\n\n> On Fri, Dec 9, 2022 at 6:52 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > Remind me again why is part_prune_results_list not part of struct\n> > CachedPlan then? I tried to understand that based on comments upthread,\n> > but I was unable to find anything.\n> \n> It used to be part of CachedPlan for a brief period of time (in patch\n> v12 I posted in [1]), but David, in his reply to [1], said he wasn't\n> so sure that it belonged there.\n\nI'm not sure I necessarily agree with that. 
I'll have a look at v12 to\ntry and understand what David was so unhappy about.\n\n> > (My first reaction to your above comment was \"well, rename GetCachedPlan\n> > then, maybe to GetRunnablePlan\", but then I'm wondering if CachedPlan is\n> > in any way a structure that must be \"immutable\" in the way parser output\n> > is. Looking at the comment at the top of plancache.c it appears to me\n> > that it isn't, but maybe I'm missing something.)\n> \n> CachedPlan *is* supposed to be read-only per the comment above\n> CachedPlanSource definition:\n> \n> * ...If we are using a generic\n> * cached plan then it is meant to be re-used across multiple executions, so\n> * callers must always treat CachedPlans as read-only.\n\nI read that as implying that the part_prune_results_list must remain\nintact as long as no invalidations occur. Does part_prune_results_list\nreally change as a result of something other than a sinval event?\nKeep in mind that if a sinval message that touches one of the relations\nin the plan arrives, then we'll discard it and generate it afresh. I\ndon't see that the part_prune_results_list would change otherwise, but\nmaybe I misunderstand?\n\n> FYI, there was even an idea of putting a PartitionPruneResults for a\n> given PlannedStmt into the PlannedStmt itself [2], but PlannedStmt is\n> supposed to be read-only too [3].\n\nHmm, I'm not familiar with PlannedStmt lifetime, but I'm definitely not\nbetting that Tom is wrong about this.\n\n> Maybe we need some new overarching context when invoking plancache, if\n> Portal can't already be it, whose struct can be passed to\n> GetCachedPlan() to put the pruning results in? Perhaps,\n> GetRunnablePlan() that you floated could be a wrapper for\n> GetCachedPlan(), owning that new context.\n\nPerhaps that is a solution. 
I'm not sure.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Uno puede defenderse de los ataques; contra los elogios se esta indefenso\"\n\n\n", "msg_date": "Fri, 9 Dec 2022 11:49:47 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Dec 9, 2022 at 7:49 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Dec-09, Amit Langote wrote:\n> > On Fri, Dec 9, 2022 at 6:52 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > > Remind me again why is part_prune_results_list not part of struct\n> > > CachedPlan then? I tried to understand that based on comments upthread,\n> > > but I was unable to find anything.\n> >\n> > > (My first reaction to your above comment was \"well, rename GetCachedPlan\n> > > then, maybe to GetRunnablePlan\", but then I'm wondering if CachedPlan is\n> > > in any way a structure that must be \"immutable\" in the way parser output\n> > > is. Looking at the comment at the top of plancache.c it appears to me\n> > > that it isn't, but maybe I'm missing something.)\n> >\n> > CachedPlan *is* supposed to be read-only per the comment above\n> > CachedPlanSource definition:\n> >\n> > * ...If we are using a generic\n> > * cached plan then it is meant to be re-used across multiple executions, so\n> > * callers must always treat CachedPlans as read-only.\n>\n> I read that as implying that the part_prune_results_list must remain\n> intact as long as no invalidations occur. Does part_prune_result_list\n> really change as a result of something other than a sinval event?\n> Keep in mind that if a sinval message that touches one of the relations\n> in the plan arrives, then we'll discard it and generate it afresh. 
I\n> don't see that the part_prune_results_list would change otherwise, but\n> maybe I misunderstand?\n\nPruning will be done afresh on every fetch of a given cached plan when\nCheckCachedPlan() is called on it, so the part_prune_results_list part\nwill be discarded and rebuilt as many times as the plan is executed.\nYou'll find a description around CachedPlanSavePartitionPruneResults()\nthat's in v12.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 9 Dec 2022 20:02:05 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On 2022-Dec-09, Amit Langote wrote:\n\n> Pruning will be done afresh on every fetch of a given cached plan when\n> CheckCachedPlan() is called on it, so the part_prune_results_list part\n> will be discarded and rebuilt as many times as the plan is executed.\n> You'll find a description around CachedPlanSavePartitionPruneResults()\n> that's in v12.\n\nI see.\n\nIn that case, a separate container struct seems warranted.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Industry suffers from the managerial dogma that for the sake of stability\nand continuity, the company should be independent of the competence of\nindividual employees.\" (E. 
Dijkstra)\n\n\n", "msg_date": "Fri, 9 Dec 2022 12:37:44 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Dec 9, 2022 at 8:37 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Dec-09, Amit Langote wrote:\n>\n> > Pruning will be done afresh on every fetch of a given cached plan when\n> > CheckCachedPlan() is called on it, so the part_prune_results_list part\n> > will be discarded and rebuilt as many times as the plan is executed.\n> > You'll find a description around CachedPlanSavePartitionPruneResults()\n> > that's in v12.\n>\n> I see.\n>\n> In that case, a separate container struct seems warranted.\n\nI thought about this today and played around with some container struct ideas.\n\nThough, I started feeling like putting all the new logic being added\nby this patch into plancache.c at the heart of GetCachedPlan() and\ntweaking its API in kind of unintuitive ways may not have been such a\ngood idea to begin with. So I started thinking again about your\nGetRunnablePlan() wrapper idea and thought maybe we could do something\nwith it. Let's say we name it GetCachedPlanLockPartitions() and put\nthe logic that does initial pruning with the new\nExecutorDoInitialPruning() in it, instead of in the normal\nGetCachedPlan() path. Any callers that call GetCachedPlan() instead\ncall GetCachedPlanLockPartitions() with either the List ** parameter\nas now or some container struct if that seems better. Whether\nGetCachedPlanLockPartitions() needs to do anything other than return\nthe CachedPlan returned by GetCachedPlan() can be decided by the\nlatter setting, say, CachedPlan.has_unlocked_partitions. 
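In toy form, the control flow I have in mind is roughly the following -- PlanSketch and its fields are invented stand-ins (no real locking or pruning happens here), so only the retry structure is meaningful:

```c
#include <stdbool.h>

/* Toy stand-in for CachedPlan; just enough state for the control flow. */
typedef struct PlanSketch
{
    bool    has_unlocked_partitions;        /* partition locking deferred */
    bool    invalidated_by_partition_locks; /* first partition-lock pass fails */
    int     builds;                         /* times the plan was (re)built */
} PlanSketch;

/* Models GetCachedPlan(): build/validate, locking only the minimal set. */
static void
get_cached_plan(PlanSketch *plan)
{
    plan->builds++;
}

/*
 * Models the proposed wrapper: prune, lock the surviving partitions, and
 * call GetCachedPlan() again if that locking invalidated the plan.
 */
static int
get_cached_plan_lock_partitions(PlanSketch *plan)
{
    for (;;)
    {
        get_cached_plan(plan);
        if (!plan->has_unlocked_partitions)
            break;              /* nothing was deferred; plan is ready */

        /* ... ExecutorDoInitialPruning(), then lock what survived ... */
        if (!plan->invalidated_by_partition_locks)
            break;              /* plan survived partition locking */

        /* assume the rebuilt plan holds on the next pass */
        plan->invalidated_by_partition_locks = false;
    }
    return plan->builds;
}
```
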
That will be\ndone by AcquireExecutorLocks() when it sees containsInitialPruning in\nany of the PlannedStmts it sees, locking only the\nPlannedStmt.minLockRelids set (which is all relations where no pruning\nis needed!), leaving the partition locking to\nGetCachedPlanLockPartitions(). If the CachedPlan is invalidated\nduring the partition locking phase, it calls GetCachedPlan() again;\nmaybe some refactoring is needed to avoid too much useless work in\nsuch cases.\n\nThoughts?\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 12 Dec 2022 20:19:19 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On 2022-Dec-12, Amit Langote wrote:\n\n> I started feeling like putting all the new logic being added\n> by this patch into plancache.c at the heart of GetCachedPlan() and\n> tweaking its API in kind of unintuitive ways may not have been such a\n> good idea to begin with. So I started thinking again about your\n> GetRunnablePlan() wrapper idea and thought maybe we could do something\n> with it. Let's say we name it GetCachedPlanLockPartitions() and put\n> the logic that does initial pruning with the new\n> ExecutorDoInitialPruning() in it, instead of in the normal\n> GetCachedPlan() path. Any callers that call GetCachedPlan() instead\n> call GetCachedPlanLockPartitions() with either the List ** parameter\n> as now or some container struct if that seems better. Whether\n> GetCachedPlanLockPartitions() needs to do anything other than return\n> the CachedPlan returned by GetCachedPlan() can be decided by the\n> latter setting, say, CachedPlan.has_unlocked_partitions. 
That will be\n> done by AcquireExecutorLocks() when it sees containsInitialPruning in\n> any of the PlannedStmts it sees, locking only the\n> PlannedStmt.minLockRelids set (which is all relations where no pruning\n> is needed!), leaving the partition locking to\n> GetCachedPlanLockPartitions().\n\nHmm. This doesn't sound totally unreasonable, except to the point David\nwas making that perhaps we may want this container struct to accommodate\nother things in the future than just the partition pruning results, so I\nthink its name (and that of the function that produces it) ought to be a\nlittle more generic than that.\n\n(I think this also answers your question on whether a List ** is better\nthan a container struct.)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n"Las cosas son buenas o malas segun las hace nuestra opinión" (Lisias)\n\n\n", "msg_date": "Mon, 12 Dec 2022 18:24:07 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Dec 13, 2022 at 2:24 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Dec-12, Amit Langote wrote:\n> > I started feeling like putting all the new logic being added\n> > by this patch into plancache.c at the heart of GetCachedPlan() and\n> > tweaking its API in kind of unintuitive ways may not have been such a\n> > good idea to begin with. So I started thinking again about your\n> > GetRunnablePlan() wrapper idea and thought maybe we could do something\n> > with it. Let's say we name it GetCachedPlanLockPartitions() and put\n> > the logic that does initial pruning with the new\n> > ExecutorDoInitialPruning() in it, instead of in the normal\n> > GetCachedPlan() path. Any callers that call GetCachedPlan() instead\n> > call GetCachedPlanLockPartitions() with either the List ** parameter\n> > as now or some container struct if that seems better. 
Whether\n> > GetCachedPlanLockPartitions() needs to do anything other than return\n> > the CachedPlan returned by GetCachedPlan() can be decided by the\n> > latter setting, say, CachedPlan.has_unlocked_partitions.  That will be\n> > done by AcquireExecutorLocks() when it sees containsInitialPruning in\n> > any of the PlannedStmts it sees, locking only the\n> > PlannedStmt.minLockRelids set (which is all relations where no pruning\n> > is needed!), leaving the partition locking to\n> > GetCachedPlanLockPartitions().\n>\n> Hmm.  This doesn't sound totally unreasonable, except to the point David\n> was making that perhaps we may want this container struct to accommodate\n> other things in the future than just the partition pruning results, so I\n> think its name (and that of the function that produces it) ought to be a\n> little more generic than that.\n>\n> (I think this also answers your question on whether a List ** is better\n> than a container struct.)\n\nOK, so here's a WIP attempt at that.\n\nI have moved the original functionality of GetCachedPlan() to\nGetCachedPlanInternal(), turning the former into a sort of controller\nas described shortly.  The latter's CheckCachedPlan() part now only\nlocks the "minimal" set of, non-prunable, relations, making a note of\nwhether the plan contains any prunable subnodes and thus prunable\nrelations whose locking is deferred to the caller, GetCachedPlan().\nGetCachedPlan(), as a sort of controller as mentioned before, does the\npruning if needed on the minimally valid plan returned by\nGetCachedPlanInternal(), locks the partitions that survive, and redoes\nthe whole thing if the locking of partitions invalidates the plan.\n\nThe pruning results are returned through the new output parameter of\nGetCachedPlan() of type CachedPlanExtra.  I named it so after much\nconsideration, because all the new logic that produces stuff to put\ninto it is a part of the plancache module and has to do with\nmanipulating a CachedPlan. 
(I had considered CachedPlanExecInfo to\nindicate that it contains information that is to be forwarded to the\nexecutor, though that just didn't seem to fit in plancache.h.)\n\nI have broken out a few things into a preparatory patch 0001. Mainly,\nit invents PlannedStmt.minLockRelids to replace the\nAcquireExecutorLocks()'s current loop over the range table to figure\nout the relations to lock. I also threw in a couple of pruning\nrelated non-functional changes in there to make it easier to read the\n0002, which is the main patch.\n\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 14 Dec 2022 17:35:47 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Wed, Dec 14, 2022 at 5:35 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> I have moved the original functionality of GetCachedPlan() to\n> GetCachedPlanInternal(), turning the former into a sort of controller\n> as described shortly. The latter's CheckCachedPlan() part now only\n> locks the \"minimal\" set of, non-prunable, relations, making a note of\n> whether the plan contains any prunable subnodes and thus prunable\n> relations whose locking is deferred to the caller, GetCachedPlan().\n> GetCachedPlan(), as a sort of controller as mentioned before, does the\n> pruning if needed on the minimally valid plan returned by\n> GetCachedPlanInternal(), locks the partitions that survive, and redoes\n> the whole thing if the locking of partitions invalidates the plan.\n\nAfter sleeping on it, I realized this doesn't have to be that\ncomplicated. 
Rather than turn GetCachedPlan() into a wrapper for\nhandling deferred partition locking as outlined above, I could have\nchanged it more simply as follows to get the same thing done:\n\n if (!customplan)\n {\n- if (CheckCachedPlan(plansource))\n+ bool hasUnlockedParts = false;\n+\n+ if (CheckCachedPlan(plansource, &hasUnlockedParts) &&\n+ hasUnlockedParts &&\n+ CachedPlanLockPartitions(plansource, boundParams, owner, extra))\n {\n /* We want a generic plan, and we already have a valid one */\n plan = plansource->gplan;\n\nAttached updated patch does it like that.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 16 Dec 2022 11:33:53 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "This version of the patch looks not entirely unreasonable to me. I'll\nset this as Ready for Committer in case David or Tom or someone else\nwant to have a look and potentially commit it.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 21 Dec 2022 11:18:46 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Wed, Dec 21, 2022 at 7:18 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> This version of the patch looks not entirely unreasonable to me. I'll\n> set this as Ready for Committer in case David or Tom or someone else\n> want to have a look and potentially commit it.\n\nThank you, Alvaro.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 21 Dec 2022 19:47:24 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> This version of the patch looks not entirely unreasonable to me. 
I'll\n> set this as Ready for Committer in case David or Tom or someone else\n> want to have a look and potentially commit it.\n\nI will have a look during the January CF.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 21 Dec 2022 10:18:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "I spent some time re-reading this whole thread, and the more I read\nthe less happy I got. We are adding a lot of complexity and introducing\ncoding hazards that will surely bite somebody someday. And after awhile\nI had what felt like an epiphany: the whole problem arises because the\nsystem is wrongly factored. We should get rid of AcquireExecutorLocks\naltogether, allowing the plancache to hand back a generic plan that\nit's not certain of the validity of, and instead integrate the\nresponsibility for acquiring locks into executor startup. It'd have\nto be optional there, since we don't need new locks in the case of\nexecuting a just-planned plan; but we can easily add another eflags\nbit (EXEC_FLAG_GET_LOCKS or so). Then there has to be a convention\nwhereby the ExecInitNode traversal can return an indicator that\n\"we failed because the plan is stale, please make a new plan\".\n\nThere are a couple reasons why this feels like a good idea:\n\n* There's no need for worry about keeping the locking decisions in sync\nwith what executor startup does.\n\n* We don't need to add the overhead proposed in the current patch to\npass forward data about what got locked/pruned. 
While that overhead\nis hopefully less expensive than the locks it saved acquiring, it's\nstill overhead (and in some cases the patch will fail to save acquiring\nany locks, making it certainly a net negative).\n\n* In a successfully built execution state tree, there will simply\nnot be any nodes corresponding to pruned-away, never-locked subplans.\nAs long as code like EXPLAIN follows the state tree and doesn't poke\ninto plan nodes that have no matching state, it's secure against the\nsort of problems that Robert worried about upthread.\n\nWhile I've not attempted to write any code for this, I can also\nthink of a few issues that'd have to be resolved:\n\n* We'd be pushing the responsibility for looping back and re-planning\nout to fairly high-level calling code. There are only half a dozen\ncallers of GetCachedPlan, so there's not that many places to be\ntouched; but in some of those places the subsequent executor-start call\nis not close by, so that the necessary refactoring might be pretty\npainful. I doubt there's anything insurmountable, but we'd definitely\nbe changing some fundamental APIs.\n\n* In some cases (views, at least) we need to acquire lock on relations\nthat aren't directly reflected anywhere in the plan tree. So there'd\nhave to be a separate mechanism for getting those locks and rechecking\nvalidity afterward. A list of relevant relation OIDs might be enough\nfor that.\n\n* We currently do ExecCheckPermissions() before initializing the\nplan state tree. It won't do to check permissions on relations we\nhaven't yet locked, so that responsibility would have to be moved.\nMaybe that could also be integrated into the initialization recursion?\nNot sure.\n\n* In the existing usage of AcquireExecutorLocks, if we do decide\nthat the plan is stale then we are able to release all the locks\nwe got before we go off and replan. 
I'm not certain if that behavior\nneeds to be preserved, but if it does then that would require some\nadditional bookkeeping in the executor.\n\n* This approach is optimizing on the assumption that we usually\nwon't need to replan, because if we do then we might waste a fair\namount of executor startup overhead before discovering we have\nto throw all that state away. I think that's clearly the right\nway to bet, but perhaps somebody else has a different view.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 19 Jan 2023 14:39:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Jan 20, 2023 at 4:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I spent some time re-reading this whole thread, and the more I read\n> the less happy I got.\n\nThanks a lot for your time on this.\n\n> We are adding a lot of complexity and introducing\n> coding hazards that will surely bite somebody someday. And after awhile\n> I had what felt like an epiphany: the whole problem arises because the\n> system is wrongly factored. We should get rid of AcquireExecutorLocks\n> altogether, allowing the plancache to hand back a generic plan that\n> it's not certain of the validity of, and instead integrate the\n> responsibility for acquiring locks into executor startup. It'd have\n> to be optional there, since we don't need new locks in the case of\n> executing a just-planned plan; but we can easily add another eflags\n> bit (EXEC_FLAG_GET_LOCKS or so). Then there has to be a convention\n> whereby the ExecInitNode traversal can return an indicator that\n> \"we failed because the plan is stale, please make a new plan\".\n\nInteresting. 
The current implementation relies on\nPlanCacheRelCallback() marking a generic CachedPlan as invalid, so\nperhaps there will have to be some sharing of state between the\nplancache and the executor for this to work?\n\n> There are a couple reasons why this feels like a good idea:\n>\n> * There's no need for worry about keeping the locking decisions in sync\n> with what executor startup does.\n>\n> * We don't need to add the overhead proposed in the current patch to\n> pass forward data about what got locked/pruned. While that overhead\n> is hopefully less expensive than the locks it saved acquiring, it's\n> still overhead (and in some cases the patch will fail to save acquiring\n> any locks, making it certainly a net negative).\n>\n> * In a successfully built execution state tree, there will simply\n> not be any nodes corresponding to pruned-away, never-locked subplans.\n> As long as code like EXPLAIN follows the state tree and doesn't poke\n> into plan nodes that have no matching state, it's secure against the\n> sort of problems that Robert worried about upthread.\n\nI think this is true with the patch as proposed too, but I was still a\nbit worried about what an ExecutorStart_hook may be doing with an\nuninitialized plan tree. Maybe we're mandating that the hook must\ncall standard_ExecutorStart() and only work with the finished\nPlanState tree?\n\n> While I've not attempted to write any code for this, I can also\n> think of a few issues that'd have to be resolved:\n>\n> * We'd be pushing the responsibility for looping back and re-planning\n> out to fairly high-level calling code. There are only half a dozen\n> callers of GetCachedPlan, so there's not that many places to be\n> touched; but in some of those places the subsequent executor-start call\n> is not close by, so that the necessary refactoring might be pretty\n> painful. I doubt there's anything insurmountable, but we'd definitely\n> be changing some fundamental APIs.\n\nYeah. 
I suppose mostly the same place that the current patch is\ntouching to pass around the PartitionPruneResult nodes.\n\n> * In some cases (views, at least) we need to acquire lock on relations\n> that aren't directly reflected anywhere in the plan tree. So there'd\n> have to be a separate mechanism for getting those locks and rechecking\n> validity afterward. A list of relevant relation OIDs might be enough\n> for that.\n\nHmm, a list of only the OIDs wouldn't preserve the lock mode, so maybe\na list or bitmapset of the RTIs, something along the lines of\nPlannedStmt.minLockRelids in the patch?\n\nIt perhaps even makes sense to make a special list in PlannedStmt for\nonly the views?\n\n> * We currently do ExecCheckPermissions() before initializing the\n> plan state tree. It won't do to check permissions on relations we\n> haven't yet locked, so that responsibility would have to be moved.\n> Maybe that could also be integrated into the initialization recursion?\n> Not sure.\n\nAh, I remember mentioning moving that into ExecGetRangeTableRelation()\n[1], but I guess that misses relations that are not referenced in the\nplan tree, such as views. Though maybe that's not a problem if we\ntrack views separately as mentioned above.\n\n> * In the existing usage of AcquireExecutorLocks, if we do decide\n> that the plan is stale then we are able to release all the locks\n> we got before we go off and replan. I'm not certain if that behavior\n> needs to be preserved, but if it does then that would require some\n> additional bookkeeping in the executor.\n\nI think maybe we'll want to continue to release the existing locks,\nbecause if we don't, it's possible we may keep some locks uselessly if\nreplanning might lock a different set of relations.\n\n> * This approach is optimizing on the assumption that we usually\n> won't need to replan, because if we do then we might waste a fair\n> amount of executor startup overhead before discovering we have\n> to throw all that state away. 
I think that's clearly the right\n> way to bet, but perhaps somebody else has a different view.\n\nNot sure if you'd like this, because it would still keep the\nPartitionPruneResult business, but this will be less of a problem if\nwe do the initial pruning at the beginning of InitPlan(), followed by\nlocking, before doing anything else.  We would have initialized the\nQueryDesc and the EState, but only minimally.  That also keeps the\nPartitionPruneResult business local to the executor.\n\nWould you like me to hack up a PoC or are you already on that?\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/CA%2BHiwqG7ZruBmmih3wPsBZ4s0H2EhywrnXEduckY5Hr3fWzPWA%40mail.gmail.com", "msg_date": "Fri, 20 Jan 2023 12:13:53 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Fri, Jan 20, 2023 at 4:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I had what felt like an epiphany: the whole problem arises because the\n>> system is wrongly factored.  We should get rid of AcquireExecutorLocks\n>> altogether, allowing the plancache to hand back a generic plan that\n>> it's not certain of the validity of, and instead integrate the\n>> responsibility for acquiring locks into executor startup.\n\n> Interesting.  The current implementation relies on\n> PlanCacheRelCallback() marking a generic CachedPlan as invalid, so\n> perhaps there will have to be some sharing of state between the\n> plancache and the executor for this to work?\n\nYeah.  Thinking a little harder, I think this would have to involve\npassing a CachedPlan pointer to the executor, and what the executor\nwould do after acquiring each lock is to ask the plancache "hey, do\nyou still think this CachedPlan entry is valid?". 
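\nIn rough pseudocode (the function names here are placeholders, not a\nconcrete proposal), each lock acquisition inside the init recursion\nwould then amount to\n\n    LockRelationOid(rte->relid, rte->rellockmode);\n    if (!CachedPlanStillValid(cplan))\n        return false;   /* plan went stale; unwind and replan */\n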
In the case where\nthere's a problem, the AcceptInvalidationMessages call involved in\nlock acquisition would lead to a cache inval that clears the validity\nflag on the CachedPlan entry, and this would provide an inexpensive\nway to check if that happened.\n\nIt might be possible to incorporate this pointer into PlannedStmt\ninstead of passing it separately.\n\n>> * In a successfully built execution state tree, there will simply\n>> not be any nodes corresponding to pruned-away, never-locked subplans.\n\n> I think this is true with the patch as proposed too, but I was still a\n> bit worried about what an ExecutorStart_hook may be doing with an\n> uninitialized plan tree. Maybe we're mandating that the hook must\n> call standard_ExecutorStart() and only work with the finished\n> PlanState tree?\n\nIt would certainly be incumbent on any such hook to not touch\nnot-yet-locked parts of the plan tree. I'm not particularly concerned\nabout that sort of requirements change, because we'd be breaking APIs\nall through this area in any case.\n\n>> * In some cases (views, at least) we need to acquire lock on relations\n>> that aren't directly reflected anywhere in the plan tree. So there'd\n>> have to be a separate mechanism for getting those locks and rechecking\n>> validity afterward. A list of relevant relation OIDs might be enough\n>> for that.\n\n> Hmm, a list of only the OIDs wouldn't preserve the lock mode,\n\nGood point. 
I wonder if we could integrate this with the\nRTEPermissionInfo data structure?\n\n> Would you like me to hack up a PoC or are you already on that?\n\nI'm not planning to work on this myself, I was hoping you would.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 19 Jan 2023 22:31:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Jan 20, 2023 at 12:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Fri, Jan 20, 2023 at 4:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I had what felt like an epiphany: the whole problem arises because the\n> >> system is wrongly factored. We should get rid of AcquireExecutorLocks\n> >> altogether, allowing the plancache to hand back a generic plan that\n> >> it's not certain of the validity of, and instead integrate the\n> >> responsibility for acquiring locks into executor startup.\n>\n> > Interesting. The current implementation relies on\n> > PlanCacheRelCallback() marking a generic CachedPlan as invalid, so\n> > perhaps there will have to be some sharing of state between the\n> > plancache and the executor for this to work?\n>\n> Yeah. Thinking a little harder, I think this would have to involve\n> passing a CachedPlan pointer to the executor, and what the executor\n> would do after acquiring each lock is to ask the plancache \"hey, do\n> you still think this CachedPlan entry is valid?\". In the case where\n> there's a problem, the AcceptInvalidationMessages call involved in\n> lock acquisition would lead to a cache inval that clears the validity\n> flag on the CachedPlan entry, and this would provide an inexpensive\n> way to check if that happened.\n\nOK, thanks, this is useful.\n\n> It might be possible to incorporate this pointer into PlannedStmt\n> instead of passing it separately.\n\nYeah, that would be less churn. 
Though, I wonder if you still hold\nthat PlannedStmt should not be scribbled upon outside the planner as\nyou said upthread [1]?\n\n> >> * In a successfully built execution state tree, there will simply\n> >> not be any nodes corresponding to pruned-away, never-locked subplans.\n>\n> > I think this is true with the patch as proposed too, but I was still a\n> > bit worried about what an ExecutorStart_hook may be doing with an\n> > uninitialized plan tree. Maybe we're mandating that the hook must\n> > call standard_ExecutorStart() and only work with the finished\n> > PlanState tree?\n>\n> It would certainly be incumbent on any such hook to not touch\n> not-yet-locked parts of the plan tree. I'm not particularly concerned\n> about that sort of requirements change, because we'd be breaking APIs\n> all through this area in any case.\n\nOK. Perhaps something that should be documented around ExecutorStart().\n\n> >> * In some cases (views, at least) we need to acquire lock on relations\n> >> that aren't directly reflected anywhere in the plan tree. So there'd\n> >> have to be a separate mechanism for getting those locks and rechecking\n> >> validity afterward. A list of relevant relation OIDs might be enough\n> >> for that.\n>\n> > Hmm, a list of only the OIDs wouldn't preserve the lock mode,\n>\n> Good point. I wonder if we could integrate this with the\n> RTEPermissionInfo data structure?\n\nYou mean adding a rellockmode field to RTEPermissionInfo?\n\n> > Would you like me to hack up a PoC or are you already on that?\n>\n> I'm not planning to work on this myself, I was hoping you would.\n\nAlright, I'll try to get something out early next week. 
Thanks for\nall the pointers.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/922566.1648784745%40sss.pgh.pa.us\n\n\n", "msg_date": "Fri, 20 Jan 2023 12:52:07 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Fri, Jan 20, 2023 at 12:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> It might be possible to incorporate this pointer into PlannedStmt\n>> instead of passing it separately.\n\n> Yeah, that would be less churn. Though, I wonder if you still hold\n> that PlannedStmt should not be scribbled upon outside the planner as\n> you said upthread [1]?\n\nWell, the whole point of that rule is that the executor can't modify\na plancache entry. If the plancache itself sets a field in such an\nentry, that doesn't seem problematic from here.\n\nBut there's other possibilities if that bothers you; QueryDesc\ncould hold the field, for example. Also, I bet we'd want to copy\nit into EState for the main initialization recursion.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 19 Jan 2023 22:58:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Jan 20, 2023 at 12:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Fri, Jan 20, 2023 at 12:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> It might be possible to incorporate this pointer into PlannedStmt\n> >> instead of passing it separately.\n>\n> > Yeah, that would be less churn. Though, I wonder if you still hold\n> > that PlannedStmt should not be scribbled upon outside the planner as\n> > you said upthread [1]?\n>\n> Well, the whole point of that rule is that the executor can't modify\n> a plancache entry. 
If the plancache itself sets a field in such an\n> entry, that doesn't seem problematic from here.\n>\n> But there's other possibilities if that bothers you; QueryDesc\n> could hold the field, for example. Also, I bet we'd want to copy\n> it into EState for the main initialization recursion.\n\nQueryDesc sounds good to me, and yes, also a copy in EState in any case.\n\nSo I started looking at the call sites of CreateQueryDesc() and\nstopped to look at ExecParallelGetQueryDesc(). AFAICS, we wouldn't\nneed to pass the CachedPlan to a parallel worker's rerun of\nInitPlan(), because 1) it doesn't make sense to call the plancache in\na parallel worker, 2) the leader should already have taken all the\nlocks in necessary for executing a given plan subnode that it intends\nto pass to a worker in ExecInitGather(). Does that make sense?\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 20 Jan 2023 16:19:08 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Jan 20, 2023 at 12:52 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Alright, I'll try to get something out early next week. Thanks for\n> all the pointers.\n\nSorry for the delay. Attached is what I've come up with so far.\n\nI didn't actually go with calling the plancache on every lock taken on\na relation, that is, in ExecGetRangeTableRelation(). One thing about\ndoing it that way that I didn't quite like (or didn't see a clean\nenough way to code) is the need to complicate the ExecInitNode()\ntraversal for handling the abrupt suspension of the ongoing setup of\nthe PlanState tree.\n\nSo, I decided to keep the current model of locking all the relations\nthat need to be locked before doing anything else in InitPlan(), much\nas how AcquireExecutorLocks() does it. 
A new function called from\nthe top of InitPlan that I've called ExecLockRelationsIfNeeded() does\nthat locking after performing the initial pruning in the same manner\nas the earlier patch did. That does mean that I needed to keep all\nthe adjustments of the pruning code that are required for such\nout-of-ExecInitNode() invocation of initial pruning, including those\nPartitionPruneResult to carry the result of that pruning for\nExecInitNode()-time reuse, though they no longer need be passed\nthrough many unrelated interfaces.\n\nAnyways, here's a description of the patches:\n\n0001 adjusts various call sites of ExecutorStart() to cope with the\npossibility of being asked to recreate a CachedPlan, if one is\ninvolved. The main objective here is to have as little stuff as\nsensible happen between GetCachedPlan() that returned the CachedPlan\nand ExecutorStart() so as to minimize the chances of missing cleaning\nup resources that must not be missed.\n\n0002 is preparatory refactoring to make out-of-ExecInitNode()\ninvocation of pruning possible.\n\n0003 moves the responsibility of CachedPlan validation locking into\nExecutorStart() as described above.\n\n\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 27 Jan 2023 16:01:05 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Jan 27, 2023 at 4:01 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Fri, Jan 20, 2023 at 12:52 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Alright, I'll try to get something out early next week. Thanks for\n> > all the pointers.\n>\n> Sorry for the delay. Attached is what I've come up with so far.\n>\n> I didn't actually go with calling the plancache on every lock taken on\n> a relation, that is, in ExecGetRangeTableRelation(). 
One thing about\n> doing it that way that I didn't quite like (or didn't see a clean\n> enough way to code) is the need to complicate the ExecInitNode()\n> traversal for handling the abrupt suspension of the ongoing setup of\n> the PlanState tree.\n\nOK, I gave this one more try and attached is what I came up with.\n\nThis adds an ExecPlanStillValid(), which is called right after anything\nthat may in turn call ExecGetRangeTableRelation() which has been\ntaught to lock a relation if EXEC_FLAG_GET_LOCKS has been passed in\nEState.es_top_eflags.  That includes all ExecInitNode() calls, and a\nfew other functions that call ExecGetRangeTableRelation() directly,\nsuch as ExecOpenScanRelation().  If ExecPlanStillValid() returns\nfalse, that is, if EState.es_cachedplan is found to have been\ninvalidated after a lock being taken by ExecGetRangeTableRelation(),\nwhatever function called it must return immediately and so must its\ncaller and so on.  ExecEndPlan() seems to be able to clean up after a\npartially finished attempt of initializing a PlanState tree in this\nway.  Maybe my preliminary testing didn't catch cases where pointers\nto resources that are normally put into the nodes of a PlanState tree\nare now left dangling, because a partially built PlanState tree is not\naccessible to ExecEndPlan; QueryDesc.planstate would remain NULL in\nsuch cases.  Maybe there's only es_tupleTable and es_relations that\nneed to be explicitly released and the rest is taken care of by\nresetting the ExecutorState context.\n\nOn testing, I'm afraid we're going to need something like\nsrc/test/modules/delay_execution to test that concurrent changes to\nrelation(s) in PlannedStmt.relationOids that occur somewhere between\nRevalidateCachedQuery() and InitPlan() result in the latter being\naborted and that it is handled correctly. 
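\nConcretely, the kind of scenario I mean would go something like this\n(only a sketch, not actual isolation-spec syntax):\n\n    s1: PREPARE q AS SELECT ...;\n    s1: EXECUTE q;   -- repeat until a generic plan gets used\n    s2: take a lock that makes s1's next execution pause between\n        RevalidateCachedQuery() and InitPlan()'s locking phase\n    s1: EXECUTE q;   -- blocks at that point\n    s2: drop some object the plan depends on that isn't locked yet,\n        then release the lock\n    s1: should see the plan go stale, discard the partial state,\n        replan, and finish without error\n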
It seems like it is only\nthe locking of partitions (that are not present in an unplanned Query\nand thus not protected by AcquirePlannerLocks()) that can trigger\nreplanning of a CachedPlan, so any tests we write should involve\npartitions. Should this try to test as many plan shapes as possible\nthough given the uncertainty around ExecEndPlan() robustness or should\nmanual auditing suffice to be sure that nothing's broken?\n\nOn possibly needing to move permission checking to occur *after*\ntaking locks, I realized that we don't really need to, because no\nrelation that needs its permissions should be unlocked by the time we\nget to ExecCheckPermissions(); note we only check permissions of\ntables that are present in the original parse tree and\nRevalidateCachedQuery() should have locked those. I found a couple of\nexceptions to that invariant in that views sometimes appear not to be\nin the set of relations that RevalidateCachedQuery() locks. So, I\ninvented PlannedStmt.viewRelations, a list of RT indexes of view RTEs\nthat is populated in setrefs.c. ExecLockViewRelations() called before\nExecCheckPermissions() locks those.\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 2 Feb 2023 23:49:58 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Thu, Feb 2, 2023 at 11:49 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Fri, Jan 27, 2023 at 4:01 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > I didn't actually go with calling the plancache on every lock taken on\n> > a relation, that is, in ExecGetRangeTableRelation(). 
One thing about\n> > doing it that way that I didn't quite like (or didn't see a clean\n> > enough way to code) is the need to complicate the ExecInitNode()\n> > traversal for handling the abrupt suspension of the ongoing setup of\n> > the PlanState tree.\n>\n> OK, I gave this one more try and attached is what I came up with.\n>\n> This adds an ExecPlanStillValid(), which is called right after anything\n> that may in turn call ExecGetRangeTableRelation() which has been\n> taught to lock a relation if EXEC_FLAG_GET_LOCKS has been passed in\n> EState.es_top_eflags.  That includes all ExecInitNode() calls, and a\n> few other functions that call ExecGetRangeTableRelation() directly,\n> such as ExecOpenScanRelation().  If ExecPlanStillValid() returns\n> false, that is, if EState.es_cachedplan is found to have been\n> invalidated after a lock being taken by ExecGetRangeTableRelation(),\n> whatever function called it must return immediately and so must its\n> caller and so on.  ExecEndPlan() seems to be able to clean up after a\n> partially finished attempt of initializing a PlanState tree in this\n> way.  Maybe my preliminary testing didn't catch cases where pointers\n> to resources that are normally put into the nodes of a PlanState tree\n> are now left dangling, because a partially built PlanState tree is not\n> accessible to ExecEndPlan; QueryDesc.planstate would remain NULL in\n> such cases.  Maybe there's only es_tupleTable and es_relations that\n> need to be explicitly released and the rest is taken care of by\n> resetting the ExecutorState context.\n\nIn the attached updated patch, I've made the functions that check\nExecPlanStillValid() return NULL (if returning something) instead\nof returning partially initialized structs. 
Those partially
initialized structs were not being subsequently looked at anyway.

> On testing, I'm afraid we're going to need something like
> src/test/modules/delay_execution to test that concurrent changes to
> relation(s) in PlannedStmt.relationOids that occur somewhere between
> RevalidateCachedQuery() and InitPlan() result in the latter to be
> aborted and that it is handled correctly.  It seems like it is only
> the locking of partitions (that are not present in an unplanned Query
> and thus not protected by AcquirePlannerLocks()) that can trigger
> replanning of a CachedPlan, so any tests we write should involve
> partitions.  Should this try to test as many plan shapes as possible
> though given the uncertainty around ExecEndPlan() robustness or should
> manual auditing suffice to be sure that nothing's broken?

I've added a test case under src/test/modules/delay_execution by adding
a new ExecutorStart_hook that works similarly to
delay_execution_planner().  The test works by allowing a concurrent
session to drop an object being referenced in a cached plan being
initialized while the ExecutorStart_hook waits to get an advisory
lock.  The concurrent drop of the referenced object is detected during
ExecInitNode() and thus triggers replanning of the cached plan.

I also fixed a bug in ExplainExecuteQuery() and cleaned up some
comments while testing.

-- 
Thanks, Amit Langote
EDB: http://www.enterprisedb.com", "msg_date": "Fri, 3 Feb 2023 22:01:09 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Hi,

On 2023-02-03 22:01:09 +0900, Amit Langote wrote:
> I've added a test case under src/test/modules/delay_execution by adding
> a new ExecutorStart_hook that works similarly to
> delay_execution_planner(). 
The test works by allowing a concurrent\n> session to drop an object being referenced in a cached plan being\n> initialized while the ExecutorStart_hook waits to get an advisory\n> lock. The concurrent drop of the referenced object is detected during\n> ExecInitNode() and thus triggers replanning of the cached plan.\n> \n> I also fixed a bug in the ExplainExecuteQuery() while testing and some comments.\n\nThe tests seem to frequently hang on freebsd:\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest%2F42%2F3478\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 7 Feb 2023 10:08:55 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Feb 7, 2023 at 23:38 Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2023-02-03 22:01:09 +0900, Amit Langote wrote:\n> > I've added a test case under src/modules/delay_execution by adding a\n> > new ExecutorStart_hook that works similarly as\n> > delay_execution_planner(). The test works by allowing a concurrent\n> > session to drop an object being referenced in a cached plan being\n> > initialized while the ExecutorStart_hook waits to get an advisory\n> > lock. The concurrent drop of the referenced object is detected during\n> > ExecInitNode() and thus triggers replanning of the cached plan.\n> >\n> > I also fixed a bug in the ExplainExecuteQuery() while testing and some\n> comments.\n>\n> The tests seem to frequently hang on freebsd:\n>\n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest%2F42%2F3478\n\n\nThanks for the heads up. I’ve noticed this one too, though couldn’t find\nthe testrun artifacts like I could get for some other failures (on other\ncirrus machines). 
Has anyone else been in a similar situation?

>
> <https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest%2F42%2F3478>

-- 
Thanks, Amit Langote
EDB: http://www.enterprisedb.com", "msg_date": "Wed, 8 Feb 2023 16:01:30 +0530", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Wed, Feb 8, 2023 at 7:31 PM Amit Langote <amitlangote09@gmail.com> wrote:
> On Tue, Feb 7, 2023 at 23:38 Andres Freund <andres@anarazel.de> wrote:
>> The tests seem to frequently hang on freebsd:
>> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest%2F42%2F3478
>
> Thanks for the heads up.  I've noticed this one too, though couldn't find
> the testrun artifacts like I could get for some other failures (on other
> cirrus machines). 
Has anyone else been in a similar situation?

I think I have figured out what might be going wrong on that cfbot
animal after building with the same CPPFLAGS as that animal locally.
I had forgotten to update _out/_readRangeTblEntry() to account for the
patch's change that a view's RTE_SUBQUERY now also preserves relkind
in addition to relid and rellockmode for the locking consideration.

Also, I noticed that a multi-query Portal execution with rules was
failing (thanks to a regression test added in a7d71c41db) because the
snapshot used for the 2nd query onward was not being updated for the
command ID change under the patched model of multi-query Portal
execution.  To wit, under the patched model, all queries in the
multi-query Portal case undergo ExecutorStart() before any of them are
run with ExecutorRun().  The patch, however, hadn't changed things to
update the snapshot's command ID for the 2nd query onwards, which
caused the aforementioned test case to fail.

This new model does however mean that the 2nd query onwards must use
PushCopiedSnapshot() given the current requirement of
UpdateActiveSnapshotCommandId() that the snapshot passed to it must
not be referenced anywhere else.  The new model basically requires
that each query's QueryDesc points to its own copy of the
ActiveSnapshot.  That may not be a point in favor of the patched
model, though. 
For now, I haven't been able to come up with a better\nalternative.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 2 Mar 2023 22:52:53 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Thu, Mar 2, 2023 at 10:52 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> I think I have figured out what might be going wrong on that cfbot\n> animal after building with the same CPPFLAGS as that animal locally.\n> I had forgotten to update _out/_readRangeTblEntry() to account for the\n> patch's change that a view's RTE_SUBQUERY now also preserves relkind\n> in addition to relid and rellockmode for the locking consideration.\n>\n> Also, I noticed that a multi-query Portal execution with rules was\n> failing (thanks to a regression test added in a7d71c41db) because of\n> the snapshot used for the 2nd query onward not being updated for\n> command ID change under patched model of multi-query Portal execution.\n> To wit, under the patched model, all queries in the multi-query Portal\n> case undergo ExecutorStart() before any of it is run with\n> ExecutorRun(). The patch hadn't changed things however to update the\n> snapshot's command ID for the 2nd query onwards, which caused the\n> aforementioned test case to fail.\n>\n> This new model does however mean that the 2nd query onwards must use\n> PushCopiedSnapshot() given the current requirement of\n> UpdateActiveSnapshotCommandId() that the snapshot passed to it must\n> not be referenced anywhere else. The new model basically requires\n> that each query's QueryDesc points to its own copy of the\n> ActiveSnapshot. That may not be a thing in favor of the patched model\n> though. 
For now, I haven't been able to come up with a better\n> alternative.\n\nHere's a new version addressing the following 2 points.\n\n* Like views, I realized that non-leaf relations of partition trees\nscanned by an Append/MergeAppend would need to be locked separately,\nbecause ExecInitNode() traversal of the plan tree would not account\nfor them. That is, they are not opened using\nExecGetRangeTableRelation() or ExecOpenScanRelation(). One exception\nis that some (if not all) of those non-leaf relations may be\nreferenced in PartitionPruneInfo and so locked as part of initializing\nthe corresponding PartitionPruneState, but I decided not to complicate\nthe code to filter out such relations from the set locked separately.\nTo carry the set of relations to lock, the refactoring patch 0001\nre-introduces the List of Bitmapset field named allpartrelids into\nAppend/MergeAppend nodes, which we had previously removed on the\ngrounds that those relations need not be locked separately (commits\nf2343653f5b, f003a7522bf).\n\n* I decided to initialize QueryDesc.planstate even in the cases where\nExecInitNode() traversal is aborted in the middle on detecting\nCachedPlan invalidation such that it points to a partially initialized\nPlanState tree. My earlier thinking that each PlanState node need not\nbe visited for resource cleanup in such cases was naive after all. To\nthat end, I've fixed the ExecEndNode() subroutines of all Plan node\ntypes to account for potentially uninitialized fields. There are a\ncouple of cases where I'm a bit doubtful though. In\nExecEndCustomScan(), there's no indication in CustomScanState whether\nit's OK to call EndCustomScan() when BeginCustomScan() may not have\nbeen called. 
For ForeignScanState, I've assumed that\nForeignScanState.fdw_state being set can be used as a marker that\nBeginForeignScan would have been called, though maybe that's not a\nsolid assumption.\n\nI'm also attaching a new (small) patch 0003 that eliminates the\nloop-over-rangetable in ExecCloseRangeTableRelations() in favor of\niterating over a new List field of EState named es_opened_relations,\nwhich is populated by ExecGetRangeTableRelation() with only the\nrelations that were opened. This speeds up\nExecCloseRangeTableRelations() significantly for the cases with many\nruntime-prunable partitions.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 14 Mar 2023 19:07:41 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Mar 14, 2023 at 7:07 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Mar 2, 2023 at 10:52 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > I think I have figured out what might be going wrong on that cfbot\n> > animal after building with the same CPPFLAGS as that animal locally.\n> > I had forgotten to update _out/_readRangeTblEntry() to account for the\n> > patch's change that a view's RTE_SUBQUERY now also preserves relkind\n> > in addition to relid and rellockmode for the locking consideration.\n> >\n> > Also, I noticed that a multi-query Portal execution with rules was\n> > failing (thanks to a regression test added in a7d71c41db) because of\n> > the snapshot used for the 2nd query onward not being updated for\n> > command ID change under patched model of multi-query Portal execution.\n> > To wit, under the patched model, all queries in the multi-query Portal\n> > case undergo ExecutorStart() before any of it is run with\n> > ExecutorRun(). 
The patch hadn't changed things however to update the\n> > snapshot's command ID for the 2nd query onwards, which caused the\n> > aforementioned test case to fail.\n> >\n> > This new model does however mean that the 2nd query onwards must use\n> > PushCopiedSnapshot() given the current requirement of\n> > UpdateActiveSnapshotCommandId() that the snapshot passed to it must\n> > not be referenced anywhere else. The new model basically requires\n> > that each query's QueryDesc points to its own copy of the\n> > ActiveSnapshot. That may not be a thing in favor of the patched model\n> > though. For now, I haven't been able to come up with a better\n> > alternative.\n>\n> Here's a new version addressing the following 2 points.\n>\n> * Like views, I realized that non-leaf relations of partition trees\n> scanned by an Append/MergeAppend would need to be locked separately,\n> because ExecInitNode() traversal of the plan tree would not account\n> for them. That is, they are not opened using\n> ExecGetRangeTableRelation() or ExecOpenScanRelation(). One exception\n> is that some (if not all) of those non-leaf relations may be\n> referenced in PartitionPruneInfo and so locked as part of initializing\n> the corresponding PartitionPruneState, but I decided not to complicate\n> the code to filter out such relations from the set locked separately.\n> To carry the set of relations to lock, the refactoring patch 0001\n> re-introduces the List of Bitmapset field named allpartrelids into\n> Append/MergeAppend nodes, which we had previously removed on the\n> grounds that those relations need not be locked separately (commits\n> f2343653f5b, f003a7522bf).\n>\n> * I decided to initialize QueryDesc.planstate even in the cases where\n> ExecInitNode() traversal is aborted in the middle on detecting\n> CachedPlan invalidation such that it points to a partially initialized\n> PlanState tree. 
My earlier thinking that each PlanState node need not\n> be visited for resource cleanup in such cases was naive after all. To\n> that end, I've fixed the ExecEndNode() subroutines of all Plan node\n> types to account for potentially uninitialized fields. There are a\n> couple of cases where I'm a bit doubtful though. In\n> ExecEndCustomScan(), there's no indication in CustomScanState whether\n> it's OK to call EndCustomScan() when BeginCustomScan() may not have\n> been called. For ForeignScanState, I've assumed that\n> ForeignScanState.fdw_state being set can be used as a marker that\n> BeginForeignScan would have been called, though maybe that's not a\n> solid assumption.\n>\n> I'm also attaching a new (small) patch 0003 that eliminates the\n> loop-over-rangetable in ExecCloseRangeTableRelations() in favor of\n> iterating over a new List field of EState named es_opened_relations,\n> which is populated by ExecGetRangeTableRelation() with only the\n> relations that were opened. This speeds up\n> ExecCloseRangeTableRelations() significantly for the cases with many\n> runtime-prunable partitions.\n\nHere's another version with some cosmetic changes, like fixing some\nfactually incorrect / obsolete comments and typos that I found. 
I
also noticed that I had missed noting near some table_open() that
locks taken with those can't possibly invalidate a plan (such as
lazily opened partition routing target partitions) and thus don't need
the treatment that locking during execution initialization requires.

-- 
Thanks, Amit Langote
EDB: http://www.enterprisedb.com", "msg_date": "Wed, 22 Mar 2023 21:48:49 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Wed, Mar 22, 2023 at 9:48 PM Amit Langote <amitlangote09@gmail.com> wrote:
> On Tue, Mar 14, 2023 at 7:07 PM Amit Langote <amitlangote09@gmail.com> wrote:
> > On Thu, Mar 2, 2023 at 10:52 PM Amit Langote <amitlangote09@gmail.com> wrote:
> > > I think I have figured out what might be going wrong on that cfbot
> > > animal after building with the same CPPFLAGS as that animal locally.
> > > I had forgotten to update _out/_readRangeTblEntry() to account for the
> > > patch's change that a view's RTE_SUBQUERY now also preserves relkind
> > > in addition to relid and rellockmode for the locking consideration.
> > >
> > > Also, I noticed that a multi-query Portal execution with rules was
> > > failing (thanks to a regression test added in a7d71c41db) because of
> > > the snapshot used for the 2nd query onward not being updated for
> > > command ID change under patched model of multi-query Portal execution.
> > > To wit, under the patched model, all queries in the multi-query Portal
> > > case undergo ExecutorStart() before any of it is run with
> > > ExecutorRun(). 
The patch hadn't changed things however to update the\n> > > snapshot's command ID for the 2nd query onwards, which caused the\n> > > aforementioned test case to fail.\n> > >\n> > > This new model does however mean that the 2nd query onwards must use\n> > > PushCopiedSnapshot() given the current requirement of\n> > > UpdateActiveSnapshotCommandId() that the snapshot passed to it must\n> > > not be referenced anywhere else. The new model basically requires\n> > > that each query's QueryDesc points to its own copy of the\n> > > ActiveSnapshot. That may not be a thing in favor of the patched model\n> > > though. For now, I haven't been able to come up with a better\n> > > alternative.\n> >\n> > Here's a new version addressing the following 2 points.\n> >\n> > * Like views, I realized that non-leaf relations of partition trees\n> > scanned by an Append/MergeAppend would need to be locked separately,\n> > because ExecInitNode() traversal of the plan tree would not account\n> > for them. That is, they are not opened using\n> > ExecGetRangeTableRelation() or ExecOpenScanRelation(). One exception\n> > is that some (if not all) of those non-leaf relations may be\n> > referenced in PartitionPruneInfo and so locked as part of initializing\n> > the corresponding PartitionPruneState, but I decided not to complicate\n> > the code to filter out such relations from the set locked separately.\n> > To carry the set of relations to lock, the refactoring patch 0001\n> > re-introduces the List of Bitmapset field named allpartrelids into\n> > Append/MergeAppend nodes, which we had previously removed on the\n> > grounds that those relations need not be locked separately (commits\n> > f2343653f5b, f003a7522bf).\n> >\n> > * I decided to initialize QueryDesc.planstate even in the cases where\n> > ExecInitNode() traversal is aborted in the middle on detecting\n> > CachedPlan invalidation such that it points to a partially initialized\n> > PlanState tree. 
My earlier thinking that each PlanState node need not\n> > be visited for resource cleanup in such cases was naive after all. To\n> > that end, I've fixed the ExecEndNode() subroutines of all Plan node\n> > types to account for potentially uninitialized fields. There are a\n> > couple of cases where I'm a bit doubtful though. In\n> > ExecEndCustomScan(), there's no indication in CustomScanState whether\n> > it's OK to call EndCustomScan() when BeginCustomScan() may not have\n> > been called. For ForeignScanState, I've assumed that\n> > ForeignScanState.fdw_state being set can be used as a marker that\n> > BeginForeignScan would have been called, though maybe that's not a\n> > solid assumption.\n> >\n> > I'm also attaching a new (small) patch 0003 that eliminates the\n> > loop-over-rangetable in ExecCloseRangeTableRelations() in favor of\n> > iterating over a new List field of EState named es_opened_relations,\n> > which is populated by ExecGetRangeTableRelation() with only the\n> > relations that were opened. This speeds up\n> > ExecCloseRangeTableRelations() significantly for the cases with many\n> > runtime-prunable partitions.\n>\n> Here's another version with some cosmetic changes, like fixing some\n> factually incorrect / obsolete comments and typos that I found. 
I\n> also noticed that I had missed noting near some table_open() that\n> locks taken with those can't possibly invalidate a plan (such as\n> lazily opened partition routing target partitions) and thus need the\n> treatment that locking during execution initialization requires.\n\nRebased over 3c05284d83b2 (\"Invent GENERIC_PLAN option for EXPLAIN.\").\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 27 Mar 2023 17:18:20 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "> > On Tue, Mar 14, 2023 at 7:07 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > * I decided to initialize QueryDesc.planstate even in the cases where\n> > > ExecInitNode() traversal is aborted in the middle on detecting\n> > > CachedPlan invalidation such that it points to a partially initialized\n> > > PlanState tree. My earlier thinking that each PlanState node need not\n> > > be visited for resource cleanup in such cases was naive after all. To\n> > > that end, I've fixed the ExecEndNode() subroutines of all Plan node\n> > > types to account for potentially uninitialized fields. There are a\n> > > couple of cases where I'm a bit doubtful though. In\n> > > ExecEndCustomScan(), there's no indication in CustomScanState whether\n> > > it's OK to call EndCustomScan() when BeginCustomScan() may not have\n> > > been called. 
For ForeignScanState, I've assumed that\n> > > ForeignScanState.fdw_state being set can be used as a marker that\n> > > BeginForeignScan would have been called, though maybe that's not a\n> > > solid assumption.\n\nIt seems I hadn't noted in the ExecEndNode()'s comment that all node\ntypes' recursive subroutines need to handle the change made by this\npatch that the corresponding ExecInitNode() subroutine may now return\nearly without having initialized all state struct fields.\n\nAlso noted in the documentation for CustomScan and ForeignScan that\nthe Begin*Scan callback may not have been called at all, so the\nEnd*Scan should handle that gracefully.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 27 Mar 2023 23:00:38 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> [ v38 patchset ]\n\nI spent a little bit of time looking through this, and concluded that\nit's not something I will be wanting to push into v16 at this stage.\nThe patch doesn't seem very close to being committable on its own\nterms, and even if it was now is not a great time in the dev cycle\nto be making significant executor API changes. Too much risk of\nhaving to thrash the API during beta, or even change it some more\nin v17. I suggest that we push this forward to the next CF with the\nhope of landing it early in v17.\n\nA few concrete thoughts:\n\n* I understand that your plan now is to acquire locks on all the\noriginally-named tables, then do permissions checks (which will\ninvolve only those tables), then dynamically lock just inheritance and\npartitioning child tables as we descend the plan tree. 
That seems\nmore or less okay to me, but it could be reflected better in the\nstructure of the patch perhaps.\n\n* In particular I don't much like the \"viewRelations\" list, which\nseems like a wart; those ought to be handled more nearly the same way\nas other RTEs. (One concrete reason why is that this scheme is going\nto result in locking views in a different order than they were locked\nduring original parsing, which perhaps could contribute to deadlocks.)\nMaybe we should store an integer list of which RTIs need to be locked\nin the early phase? Building that in the parser/rewriter would provide\na solid guide to the original locking order, so we'd be trivially sure\nof duplicating that. (It might be close enough to follow the RT list\norder, which is basically what AcquireExecutorLocks does today, but\nthis'd be more certain to do the right thing.) I'm less concerned\nabout lock order for child tables because those are just going to\nfollow the inheritance or partitioning structure.\n\n* I don't understand the need for changes like this:\n\n \t/* clean up tuple table */\n-\tExecClearTuple(node->ps.ps_ResultTupleSlot);\n+\tif (node->ps.ps_ResultTupleSlot)\n+\t\tExecClearTuple(node->ps.ps_ResultTupleSlot);\n\nISTM that the process ought to involve taking a lock (if needed)\nbefore we have built any execution state for a given plan node,\nand if we find we have to fail, returning NULL instead of a\npartially-valid planstate node. Otherwise, considerations of how\nto handle partially-valid nodes are going to metastasize into all\nsorts of places, almost certainly including EXPLAIN for instance.\nI think we ought to be able to limit the damage to \"parent nodes\nmight have NULL child links that you wouldn't have expected\".\nThat wouldn't faze ExecEndNode at all, nor most other code.\n\n* More attention is needed to comments. 
For example, in a couple of\nplaces in plancache.c you have removed function header comments\ndefining API details and not replaced them with any info about the new\ndetails, despite the fact that those details are more complex than the\nold.\n\n> It seems I hadn't noted in the ExecEndNode()'s comment that all node\n> types' recursive subroutines need to handle the change made by this\n> patch that the corresponding ExecInitNode() subroutine may now return\n> early without having initialized all state struct fields.\n> Also noted in the documentation for CustomScan and ForeignScan that\n> the Begin*Scan callback may not have been called at all, so the\n> End*Scan should handle that gracefully.\n\nYeah, I think we need to avoid adding such requirements. It's the\nsort of thing that would far too easily get past developer testing\nand only fail once in a blue moon in the field.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 Apr 2023 17:41:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Apr 4, 2023 at 6:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > [ v38 patchset ]\n>\n> I spent a little bit of time looking through this, and concluded that\n> it's not something I will be wanting to push into v16 at this stage.\n> The patch doesn't seem very close to being committable on its own\n> terms, and even if it was now is not a great time in the dev cycle\n> to be making significant executor API changes. Too much risk of\n> having to thrash the API during beta, or even change it some more\n> in v17. 
I suggest that we push this forward to the next CF with the\n> hope of landing it early in v17.\n\nOK, thanks a lot for your feedback.\n\n> A few concrete thoughts:\n>\n> * I understand that your plan now is to acquire locks on all the\n> originally-named tables, then do permissions checks (which will\n> involve only those tables), then dynamically lock just inheritance and\n> partitioning child tables as we descend the plan tree.\n\nActually, with the current implementation of the patch, *all* of the\nrelations mentioned in the plan tree would get locked during the\nExecInitNode() traversal of the plan tree (and of those in\nplannedstmt->subplans), not just the inheritance child tables.\nLocking of non-child tables done by the executor after this patch is\nduplicative with AcquirePlannerLocks(), so that's something to be\nimproved.\n\n> That seems\n> more or less okay to me, but it could be reflected better in the\n> structure of the patch perhaps.\n>\n> * In particular I don't much like the \"viewRelations\" list, which\n> seems like a wart; those ought to be handled more nearly the same way\n> as other RTEs. (One concrete reason why is that this scheme is going\n> to result in locking views in a different order than they were locked\n> during original parsing, which perhaps could contribute to deadlocks.)\n> Maybe we should store an integer list of which RTIs need to be locked\n> in the early phase? Building that in the parser/rewriter would provide\n> a solid guide to the original locking order, so we'd be trivially sure\n> of duplicating that. (It might be close enough to follow the RT list\n> order, which is basically what AcquireExecutorLocks does today, but\n> this'd be more certain to do the right thing.) 
I'm less concerned
> about lock order for child tables because those are just going to
> follow the inheritance or partitioning structure.

What you've described here sounds somewhat like what I had implemented
in the patch versions till v31, though it used a bitmapset named
minLockRelids that is initialized by setrefs.c.  Your idea of
initializing a list before planning seems more appealing offhand than
the code I had added in setrefs.c to populate that minLockRelids
bitmapset, which would be bms_add_range(1, list_length(finalrtable)),
followed by bms_del_members(set-of-child-rel-rtis).

I'll give your idea a try.

> * I don't understand the need for changes like this:
>
>         /* clean up tuple table */
> -       ExecClearTuple(node->ps.ps_ResultTupleSlot);
> +       if (node->ps.ps_ResultTupleSlot)
> +               ExecClearTuple(node->ps.ps_ResultTupleSlot);
>
> ISTM that the process ought to involve taking a lock (if needed)
> before we have built any execution state for a given plan node,
> and if we find we have to fail, returning NULL instead of a
> partially-valid planstate node.  Otherwise, considerations of how
> to handle partially-valid nodes are going to metastasize into all
> sorts of places, almost certainly including EXPLAIN for instance.
> I think we ought to be able to limit the damage to "parent nodes
> might have NULL child links that you wouldn't have expected".
> That wouldn't faze ExecEndNode at all, nor most other code.

Hmm, yes, taking a lock before allocating any of the stuff to add into
the planstate seems like it's much easier to reason about than the
alternative I've implemented.

> * More attention is needed to comments. 
For example, in a couple of\n> places in plancache.c you have removed function header comments\n> defining API details and not replaced them with any info about the new\n> details, despite the fact that those details are more complex than the\n> old.\n\nOK, yeah, maybe I've added a bunch of explanations in execMain.c that\nshould perhaps have been in plancache.c.\n\n> > It seems I hadn't noted in the ExecEndNode()'s comment that all node\n> > types' recursive subroutines need to handle the change made by this\n> > patch that the corresponding ExecInitNode() subroutine may now return\n> > early without having initialized all state struct fields.\n> > Also noted in the documentation for CustomScan and ForeignScan that\n> > the Begin*Scan callback may not have been called at all, so the\n> > End*Scan should handle that gracefully.\n>\n> Yeah, I think we need to avoid adding such requirements. It's the\n> sort of thing that would far too easily get past developer testing\n> and only fail once in a blue moon in the field.\n\nOK, got it.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 Apr 2023 22:29:12 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Apr 4, 2023 at 10:29 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n> On Tue, Apr 4, 2023 at 6:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > A few concrete thoughts:\n> >\n> > * I understand that your plan now is to acquire locks on all the\n> > originally-named tables, then do permissions checks (which will\n> > involve only those tables), then dynamically lock just inheritance and\n> > partitioning child tables as we descend the plan tree.\n>\n> Actually, with the current implementation of the patch, *all* of the\n> relations mentioned in the plan tree would get locked during the\n> ExecInitNode() traversal of the plan tree (and of those in\n> 
plannedstmt->subplans), not just the inheritance child tables.\n> Locking of non-child tables done by the executor after this patch is\n> duplicative with AcquirePlannerLocks(), so that's something to be\n> improved.\n>\n> > That seems\n> > more or less okay to me, but it could be reflected better in the\n> > structure of the patch perhaps.\n> >\n> > * In particular I don't much like the \"viewRelations\" list, which\n> > seems like a wart; those ought to be handled more nearly the same way\n> > as other RTEs. (One concrete reason why is that this scheme is going\n> > to result in locking views in a different order than they were locked\n> > during original parsing, which perhaps could contribute to deadlocks.)\n> > Maybe we should store an integer list of which RTIs need to be locked\n> > in the early phase? Building that in the parser/rewriter would provide\n> > a solid guide to the original locking order, so we'd be trivially sure\n> > of duplicating that. (It might be close enough to follow the RT list\n> > order, which is basically what AcquireExecutorLocks does today, but\n> > this'd be more certain to do the right thing.) I'm less concerned\n> > about lock order for child tables because those are just going to\n> > follow the inheritance or partitioning structure.\n>\n> What you've described here sounds somewhat like what I had implemented\n> in the patch versions till v31, though it used a bitmapset named\n> minLockRelids that is initialized by setrefs.c. Your idea of\n> initializing a list before planning seems more appealing offhand than\n> the code I had added in setrefs.c to populate that minLockRelids\n> bitmapset, which would be bms_add_range(1, list_lenth(finalrtable)),\n> followed by bms_del_members(set-of-child-rel-rtis).\n>\n> I'll give your idea a try.\n\nAfter sleeping on this, I think we perhaps don't need to remember\noriginally-named relations if only for the purpose of locking them for\nexecution. 
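(As an aside, the minLockRelids bookkeeping quoted above -- bms_add_range() over the whole final range table followed by bms_del_members() on the child RTIs -- amounts to something like the following standalone toy sketch.  A fixed 64-bit mask stands in for PostgreSQL's dynamically sized Bitmapset, and all names here are invented for illustration:)

```c
#include <assert.h>
#include <stdint.h>

/* Toy stand-in for a Bitmapset of range-table indexes (RTIs 1..63). */
typedef uint64_t rti_set;

/* Like bms_add_range(): set bits lo..hi. */
static rti_set
add_range(rti_set set, int lo, int hi)
{
    for (int i = lo; i <= hi; i++)
        set |= (rti_set) 1 << i;
    return set;
}

/* Like bms_del_members(): clear every bit that is set in 'members'. */
static rti_set
del_members(rti_set set, rti_set members)
{
    return set & ~members;
}

/*
 * Minimum lock set: every RTI in the final range table minus the
 * planner-added inheritance-child RTIs, leaving only the relations
 * that must be locked up front.
 */
static rti_set
min_lock_relids(int nrtes, rti_set child_rtis)
{
    return del_members(add_range(0, 1, nrtes), child_rtis);
}
```

With four RTEs of which 3 and 4 are children, only RTIs 1 and 2 remain in the minimum lock set.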
That's because, for a reused (cached) plan,\nAcquirePlannerLocks() would have taken those locks anyway.\n\nAcquirePlannerLocks() doesn't lock inheritance children because they would\nbe added to the range table by the planner, so they should be locked\nseparately for execution, if needed.  I thought taking the execution-time\nlocks only when inside ExecInit[Merge]Append would work, but then we have\ncases where single-child Append/MergeAppend are stripped of the\nAppend/MergeAppend nodes by setrefs.c.  Maybe we need a place to remember\nsuch child relations, that is, only in the cases where Append/MergeAppend\nelision occurs, in something maybe esoteric-sounding like\nPlannedStmt.elidedAppendChildRels or something?\n\nAnother set of child relations that are not covered by Append/MergeAppend\nchild nodes is non-leaf partitions.  I've proposed adding a List of\nBitmapset field to Append/MergeAppend named 'allpartrelids' as part of this\npatchset (patch 0001) to track those for execution-time locking.\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 6 Apr 2023 08:23:31 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Here is a new version.  Summary of main changes since the last version\nthat Tom reviewed back in April:\n\n* ExecInitNode() subroutines now return NULL (as opposed to a\npartially initialized PlanState node as in the last version) upon\ndetecting that the CachedPlan that the plan tree is from is no longer\nvalid due to invalidation messages processed upon taking locks.  Plan\ntree subnodes that are fully initialized till the point of detection\nare added by ExecInitNode() into a List in EState called\nes_inited_plannodes.  ExecEndPlan() now iterates over that list to\nclose each one individually using ExecEndNode(). 
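(The es_inited_plannodes idea can be sketched in miniature as below: every node that finishes initialization is appended to a flat list, and cleanup walks that list rather than recursing through parent-child links, so a half-built tree is harmless -- whatever made it onto the list gets closed and nothing else is touched.  The types and functions here are invented for the sketch, not the actual PostgreSQL structures:)

```c
#include <assert.h>

#define MAX_NODES 8

/* Toy EState: a flat record of which "plan nodes" were initialized. */
typedef struct ToyEState
{
    int inited[MAX_NODES];      /* stand-in for es_inited_plannodes */
    int ninited;
    int nclosed;
} ToyEState;

/* Initialize node "id"; return 0 if an invalidation was detected. */
static int
toy_init_node(ToyEState *estate, int id, int fail_at)
{
    if (id == fail_at)
        return 0;               /* plan went invalid mid-initialization */
    estate->inited[estate->ninited++] = id;
    return 1;
}

/* Non-recursive "ExecEndPlan": close exactly the nodes that were inited. */
static void
toy_end_plan(ToyEState *estate)
{
    for (int i = 0; i < estate->ninited; i++)
        estate->nclosed++;      /* close estate->inited[i] */
}

/* Init nodes 0..n-1, stopping early on failure; clean up; report closes. */
static int
toy_run(int n, int fail_at)
{
    ToyEState estate = {{0}, 0, 0};

    for (int id = 0; id < n; id++)
        if (!toy_init_node(&estate, id, fail_at))
            break;
    toy_end_plan(&estate);
    return estate.nclosed;
}
```

A fully initialized "tree" closes all of its nodes, while one aborted partway closes only the nodes that were actually set up.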
ExecEndNode() or its\nsubroutines thus no longer need to be recursive to close the child\nnodes. Also, with this design, there is no longer the possibility of\npartially initialized PlanState trees with partially initialized\nindividual PlanState nodes, so the ExecEndNode() subroutine changes\nthat were in the last version to account for partial initialization\nare not necessary.\n\n* Instead of setting EXEC_FLAG_GET_LOCKS in es_top_eflags for the\nentire duration of InitPlan(), it is now only set in ExecInitAppend()\nand ExecInitMergeAppend(), because that's where the subnodes scanning\nchild tables would be and the executor only needs to lock child tables\nto validate a CachedPlan in a race-free manner. Parent tables that\nappear in the query would have been locked by AcquirePlannerLocks().\nChild tables whose scan subnodes don't appear under Append/MergeAppend\n(due to the latter being removed by setrefs.c for there being only a\nsingle child) are identified in PlannedStmt.elidedAppendChildRelations\nand InitPlan() locks each one found there if the plan tree is from a\nCachedPlan.\n\n* There's no longer PlannedStmt.viewRelations, because view relations\nneed not be tracked separately for locking as AcquirePlannerLocks()\ncovers them.", "msg_date": "Thu, 8 Jun 2023 23:23:21 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "> On 8 Jun 2023, at 16:23, Amit Langote <amitlangote09@gmail.com> wrote:\n> \n> Here is a new version.\n\nThe local planstate variable in the hunk below is shadowing the function\nparameter planstate which cause a compiler warning:\n\n@@ -1495,18 +1556,15 @@ ExecEndPlan(PlanState *planstate, EState *estate)\n \tListCell *l;\n \n \t/*\n-\t * shut down the node-type-specific query processing\n-\t */\n-\tExecEndNode(planstate);\n-\n-\t/*\n-\t * for subplans too\n+\t * Shut down the node-type-specific query processing for all nodes 
that\n+\t * were initialized during InitPlan(), both in the main plan tree and those\n+\t * in subplans (es_subplanstates), if any.\n \t */\n-\tforeach(l, estate->es_subplanstates)\n+\tforeach(l, estate->es_inited_plannodes)\n \t{\n-\t\tPlanState *subplanstate = (PlanState *) lfirst(l);\n+\t\tPlanState *planstate = (PlanState *) lfirst(l);\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 3 Jul 2023 15:27:22 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Mon, Jul 3, 2023 at 10:27 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 8 Jun 2023, at 16:23, Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > Here is a new version.\n>\n> The local planstate variable in the hunk below is shadowing the function\n> parameter planstate which cause a compiler warning:\n\nThanks Daniel for the heads up.\n\nAttached new version fixes that and contains a few other notable\nchanges. Before going into the details of those changes, let me\nreiterate in broad strokes what the patch is trying to do.\n\nThe idea is to move the locking of some tables referenced in a cached\n(generic) plan from plancache/GetCachedPlan() to the\nexecutor/ExecutorStart(). Specifically, the locking of inheritance\nchild tables. Why? Because partition pruning with \"initial pruning\nsteps\" contained in the Append/MergeAppend nodes may eliminate some\nchild tables that need not have been locked to begin with, though the\npruning can only occur during ExecutorStart().\n\nAfter applying this patch, GetCachedPlan() only locks the tables that\nare directly mentioned in the query to ensure that the\nanalyzed-rewritten-but-unplanned query tree backing a given CachedPlan\nis still valid (cf RevalidateCachedQuery()), but not the tables in the\nCachedPlan that would have been added by the planner. 
Tables in a\nCachedPlan that would not be locked currently only include the\ninheritance child tables / partitions of the tables mentioned in the\nquery.  This means that the plan trees in a given CachedPlan returned\nby GetCachedPlan() are only partially valid and are subject to\ninvalidation because concurrent sessions can possibly modify the child\ntables referenced in them before ExecutorStart() gets around to\nlocking them.  If the concurrent modifications do happen,\nExecutorStart() is now equipped to detect them by way of noticing that\nthe CachedPlan is invalidated and inform the caller to discard and\nrecreate the CachedPlan.  This entails changing all the call sites of\nExecutorStart() that pass it a plan tree from a CachedPlan to\nimplement the replan-and-retry-execution loop.\n\nGiven the above, ExecutorStart(), which has not needed so far to take\nany locks (except on indexes mentioned in IndexScans), now needs to\nlock child tables if executing a cached plan which contains them.  In\nthe previous versions, the patch used a flag passed in\nEState.es_top_eflags to signal ExecGetRangeTableRelation() to lock the\ntable.  The flag would be set in ExecInitAppend() and\nExecInitMergeAppend() for the duration of the loop that initializes\nchild subplans with the assumption that that's where the child tables\nwould be opened.  But not all child subplans of Append/MergeAppend\nscan child tables (think UNION ALL queries), so this approach can\nresult in redundant locking.  Worse, I needed to invent\nPlannedStmt.elidedAppendChildRelations to separately track child\ntables whose Scan nodes' parent Append/MergeAppend would be removed by\nsetrefs.c in some cases.\n\nSo, this new patch uses a flag in the RangeTblEntry itself to denote\nif the table is a child table instead of the above roundabout way.\nExecGetRangeTableRelation() can simply look at the RTE to decide\nwhether to take a lock or not. 
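(The lock decision just described reduces to a small predicate: with child RTEs flagged as planner-added, the executor only needs the RTE plus one bit of context -- "is this a cached plan?" -- since tables named in the query were already locked by AcquirePlannerLocks() when the cached plan was revalidated, and a freshly planned query holds all of its locks from planning.  The function and its argument names below are invented for this sketch:)

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Should the executor itself take the lock on this range-table entry?
 * in_from_cl: true if the table was named in the query, false if it was
 * added behind the scenes by the planner (an inheritance child).
 * from_cached_plan: true when executing a reused generic plan.
 */
static bool
rte_needs_exec_lock(bool in_from_cl, bool from_cached_plan)
{
    /*
     * Only planner-added child tables of a cached plan can still be
     * unlocked by the time ExecutorStart() runs.
     */
    return from_cached_plan && !in_from_cl;
}
```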
I considered adding a new bool field,\nbut noticed we already have inFromCl to track if a given RTE is for\ntable/entity directly mentioned in the query or for something added\nbehind-the-scenes into the range table as the field's description in\nparsenodes.h says. RTEs for child tables are added behind-the-scenes\nby the planner and it makes perfect sense to me to mark their inFromCl\nas false. I can't find anything that relies on the current behavior\nof inFromCl being set to the same value as the root inheritance parent\n(true). Patch 0002 makes this change for child RTEs.\n\nA few other notes:\n\n* A parallel worker does ExecutorStart() without access to the\nCachedPlan that the leader may have gotten its plan tree from. This\nmeans that parallel workers do not have the ability to detect plan\ntree invalidations. I think that's fine, because if the leader would\nhave been able to launch workers at all, it would also have gotten all\nthe locks to protect the (portion of) the plan tree that the workers\nwould be executing. I had an off-list discussion about this with\nRobert and he mentioned his concern that each parallel worker would\nhave its own view of which child subplans of a parallel Append are\n\"valid\" that depends on the result of its own evaluation of initial\npruning. So, there may be race conditions whereby a worker may try\nto execute plan nodes that are no longer valid, for example, if the\npartition a worker considers valid is not viewed as such by the leader\nand thus not locked. I shared my thoughts as to why that sounds\nunlikely at [1], though maybe I'm a bit too optimistic?\n\n* For multi-query portals, you can't now do ExecutorStart()\nimmediately followed by ExecutorRun() for each query in the portal,\nbecause ExecutorStart() may now fail to start a plan if it gets\ninvalidated. So PortalStart() now does ExecutorStart()s for all\nqueries and remembers the QueryDescs for PortalRun() then to do\nExecutorRun()s using. 
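(The new PortalStart() sequencing can be caricatured as follows: start every query's executor up front, remembering each QueryDesc, and bail out of the whole portal if any ExecutorStart() finds its plan invalidated so the caller can replan.  Only the control flow mirrors the text; the function and the invalid_at knob are invented for the sketch:)

```c
#include <assert.h>

/*
 * Start all queries of a multi-query "portal".  Returns the number of
 * queries successfully started, or -1 if one of them saw a stale plan
 * (invalid_at simulates which query's ExecutorStart() would fail).
 */
static int
toy_portal_start(int nqueries, int invalid_at)
{
    int nstarted = 0;

    for (int i = 0; i < nqueries; i++)
    {
        if (i == invalid_at)
            return -1;  /* ExecutorStart() failed: caller must replan */
        nstarted++;     /* remember this query's QueryDesc for PortalRun() */
    }
    return nstarted;    /* PortalRun() will do this many ExecutorRun()s */
}
```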
A consequence of this is that\nCommandCounterIncrement() now must be done between the\nExecutorStart()s of the individual plans in PortalStart() and not\nbetween the ExecutorRun()s in PortalRunMulti().  make check-world\npasses with this new arrangement, though I'm not entirely confident\nthat there are no problems lurking.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://postgr.es/m/CA+HiwqFA=swkzgGK8AmXUNFtLeEXFJwFyY3E7cTxvL46aa1OTw@mail.gmail.com", "msg_date": "Thu, 6 Jul 2023 23:29:10 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Thu, Jul 6, 2023 at 11:29 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Mon, Jul 3, 2023 at 10:27 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > > On 8 Jun 2023, at 16:23, Amit Langote <amitlangote09@gmail.com> wrote:\n> > > Here is a new version.\n> >\n> > The local planstate variable in the hunk below is shadowing the function\n> > parameter planstate which cause a compiler warning:\n>\n> Thanks Daniel for the heads up.\n>\n> Attached new version fixes that and contains a few other notable\n> changes.  Before going into the details of those changes, let me\n> reiterate in broad strokes what the patch is trying to do.\n>\n> The idea is to move the locking of some tables referenced in a cached\n> (generic) plan from plancache/GetCachedPlan() to the\n> executor/ExecutorStart().  Specifically, the locking of inheritance\n> child tables.  Why? 
Because partition pruning with \"initial pruning\n> steps\" contained in the Append/MergeAppend nodes may eliminate some\n> child tables that need not have been locked to begin with, though the\n> pruning can only occur during ExecutorStart().\n>\n> After applying this patch, GetCachedPlan() only locks the tables that\n> are directly mentioned in the query to ensure that the\n> analyzed-rewritten-but-unplanned query tree backing a given CachedPlan\n> is still valid (cf RevalidateCachedQuery()), but not the tables in the\n> CachedPlan that would have been added by the planner. Tables in a\n> CachePlan that would not be locked currently only include the\n> inheritance child tables / partitions of the tables mentioned in the\n> query. This means that the plan trees in a given CachedPlan returned\n> by GetCachedPlan() are only partially valid and are subject to\n> invalidation because concurrent sessions can possibly modify the child\n> tables referenced in them before ExecutorStart() gets around to\n> locking them. If the concurrent modifications do happen,\n> ExecutorStart() is now equipped to detect them by way of noticing that\n> the CachedPlan is invalidated and inform the caller to discard and\n> recreate the CachedPlan. This entails changing all the call sites of\n> ExecutorStart() that pass it a plan tree from a CachedPlan to\n> implement the replan-and-retry-execution loop.\n>\n> Given the above, ExecutorStart(), which has not needed so far to take\n> any locks (except on indexes mentioned in IndexScans), now needs to\n> lock child tables if executing a cached plan which contains them. In\n> the previous versions, the patch used a flag passed in\n> EState.es_top_eflags to signal ExecGetRangeTableRelation() to lock the\n> table. The flag would be set in ExecInitAppend() and\n> ExecInitMergeAppend() for the duration of the loop that initializes\n> child subplans with the assumption that that's where the child tables\n> would be opened. 
But not all child subplans of Append/MergeAppend\n> scan child tables (think UNION ALL queries), so this approach can\n> result in redundant locking. Worse, I needed to invent\n> PlannedStmt.elidedAppendChildRelations to separately track child\n> tables whose Scan nodes' parent Append/MergeAppend would be removed by\n> setrefs.c in some cases.\n>\n> So, this new patch uses a flag in the RangeTblEntry itself to denote\n> if the table is a child table instead of the above roundabout way.\n> ExecGetRangeTableRelation() can simply look at the RTE to decide\n> whether to take a lock or not. I considered adding a new bool field,\n> but noticed we already have inFromCl to track if a given RTE is for\n> table/entity directly mentioned in the query or for something added\n> behind-the-scenes into the range table as the field's description in\n> parsenodes.h says. RTEs for child tables are added behind-the-scenes\n> by the planner and it makes perfect sense to me to mark their inFromCl\n> as false. I can't find anything that relies on the current behavior\n> of inFromCl being set to the same value as the root inheritance parent\n> (true). Patch 0002 makes this change for child RTEs.\n>\n> A few other notes:\n>\n> * A parallel worker does ExecutorStart() without access to the\n> CachedPlan that the leader may have gotten its plan tree from. This\n> means that parallel workers do not have the ability to detect plan\n> tree invalidations. I think that's fine, because if the leader would\n> have been able to launch workers at all, it would also have gotten all\n> the locks to protect the (portion of) the plan tree that the workers\n> would be executing. I had an off-list discussion about this with\n> Robert and he mentioned his concern that each parallel worker would\n> have its own view of which child subplans of a parallel Append are\n> \"valid\" that depends on the result of its own evaluation of initial\n> pruning. 
So, there may be race conditions whereby a worker may try\n> to execute plan nodes that are no longer valid, for example, if the\n> partition a worker considers valid is not viewed as such by the leader\n> and thus not locked. I shared my thoughts as to why that sounds\n> unlikely at [1], though maybe I'm a bit too optimistic?\n>\n> * For multi-query portals, you can't now do ExecutorStart()\n> immediately followed by ExecutorRun() for each query in the portal,\n> because ExecutorStart() may now fail to start a plan if it gets\n> invalidated. So PortalStart() now does ExecutorStart()s for all\n> queries and remembers the QueryDescs for PortalRun() then to do\n> ExecutorRun()s using. A consequence of this is that\n> CommandCounterIncrement() now must be done between the\n> ExecutorStart()s of the individual plans in PortalStart() and not\n> between the ExecutorRun()s in PortalRunMulti(). make check-world\n> passes with this new arrangement, though I'm not entirely confident\n> that there are no problems lurking.\n\nIn an absolutely brown-paper-bag moment, I realized that I had not\nupdated src/backend/executor/README to reflect the changes to the\nexecutor's control flow that this patch makes. That is, after\nscrapping the old design back in January whose details *were*\nreflected in the patches before that redesign.\n\nAnyway, the attached fixes that.\n\nTom, do you think you have bandwidth in the near future to give this\nanother look? I think I've addressed the comments that you had given\nback in April, though as mentioned in the previous message, there may\nstill be some funny-looking aspects still remaining. 
In any case, I\nhave no intention of pressing ahead with the patch without another\ncommitter having had a chance to sign off on it.\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 13 Jul 2023 21:58:38 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Thu, 13 Jul 2023 at 13:59, Amit Langote <amitlangote09@gmail.com> wrote:\n> In an absolutely brown-paper-bag moment, I realized that I had not\n> updated src/backend/executor/README to reflect the changes to the\n> executor's control flow that this patch makes. That is, after\n> scrapping the old design back in January whose details *were*\n> reflected in the patches before that redesign.\n>\n> Anyway, the attached fixes that.\n>\n> Tom, do you think you have bandwidth in the near future to give this\n> another look? I think I've addressed the comments that you had given\n> back in April, though as mentioned in the previous message, there may\n> still be some funny-looking aspects still remaining. In any case, I\n> have no intention of pressing ahead with the patch without another\n> committer having had a chance to sign off on it.\n\nI've only just started taking a look at this, and my first test drive\nyields very impressive results:\n\n8192 partitions (3 runs, 10000 rows)\nHead 391.294989 382.622481 379.252236\nPatched 13088.145995 13406.135531 13431.828051\n\nLooking at your changes to README, I would like to suggest rewording\nthe following:\n\n+table during planning. 
This means that inheritance child tables, which are\n+added to the query's range table during planning, if they are present in a\n+cached plan tree would not have been locked.\n\nTo:\n\nThis means that inheritance child tables present in a cached plan\ntree, which are added to the query's range table during planning,\nwould not have been locked.\n\nAlso, further down:\n\ns/intiatialize/initialize/\n\nI'll carry on taking a closer look and see if I can break it.\n\nThom\n\n\n", "msg_date": "Mon, 17 Jul 2023 17:32:51 +0100", "msg_from": "Thom Brown <thom@linux.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Hi Thom,\n\nOn Tue, Jul 18, 2023 at 1:33 AM Thom Brown <thom@linux.com> wrote:\n> On Thu, 13 Jul 2023 at 13:59, Amit Langote <amitlangote09@gmail.com> wrote:\n> > In an absolutely brown-paper-bag moment, I realized that I had not\n> > updated src/backend/executor/README to reflect the changes to the\n> > executor's control flow that this patch makes. That is, after\n> > scrapping the old design back in January whose details *were*\n> > reflected in the patches before that redesign.\n> >\n> > Anyway, the attached fixes that.\n> >\n> > Tom, do you think you have bandwidth in the near future to give this\n> > another look? I think I've addressed the comments that you had given\n> > back in April, though as mentioned in the previous message, there may\n> > still be some funny-looking aspects still remaining. 
In any case, I\n> > have no intention of pressing ahead with the patch without another\n> > committer having had a chance to sign off on it.\n>\n> I've only just started taking a look at this, and my first test drive\n> yields very impressive results:\n>\n> 8192 partitions (3 runs, 10000 rows)\n> Head 391.294989 382.622481 379.252236\n> Patched 13088.145995 13406.135531 13431.828051\n\nJust to be sure, did you use pgbench --Mprepared with plan_cache_mode\n= force_generic_plan in postgresql.conf?\n\n> Looking at your changes to README, I would like to suggest rewording\n> the following:\n>\n> +table during planning. This means that inheritance child tables, which are\n> +added to the query's range table during planning, if they are present in a\n> +cached plan tree would not have been locked.\n>\n> To:\n>\n> This means that inheritance child tables present in a cached plan\n> tree, which are added to the query's range table during planning,\n> would not have been locked.\n>\n> Also, further down:\n>\n> s/intiatialize/initialize/\n>\n> I'll carry on taking a closer look and see if I can break it.\n\nThanks for looking. I've fixed these issues in the attached updated\npatch. 
I've also changed the position of a newly added paragraph in\nsrc/backend/executor/README so that it doesn't break the flow of the\nexisting text.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 18 Jul 2023 16:26:35 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, 18 Jul 2023, 08:26 Amit Langote, <amitlangote09@gmail.com> wrote:\n\n> Hi Thom,\n>\n> On Tue, Jul 18, 2023 at 1:33 AM Thom Brown <thom@linux.com> wrote:\n> > On Thu, 13 Jul 2023 at 13:59, Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> > > In an absolutely brown-paper-bag moment, I realized that I had not\n> > > updated src/backend/executor/README to reflect the changes to the\n> > > executor's control flow that this patch makes. That is, after\n> > > scrapping the old design back in January whose details *were*\n> > > reflected in the patches before that redesign.\n> > >\n> > > Anyway, the attached fixes that.\n> > >\n> > > Tom, do you think you have bandwidth in the near future to give this\n> > > another look? I think I've addressed the comments that you had given\n> > > back in April, though as mentioned in the previous message, there may\n> > > still be some funny-looking aspects still remaining. 
In any case, I\n> > > have no intention of pressing ahead with the patch without another\n> > > committer having had a chance to sign off on it.\n> >\n> > I've only just started taking a look at this, and my first test drive\n> > yields very impressive results:\n> >\n> > 8192 partitions (3 runs, 10000 rows)\n> > Head 391.294989 382.622481 379.252236\n> > Patched 13088.145995 13406.135531 13431.828051\n>\n> Just to be sure, did you use pgbench --Mprepared with plan_cache_mode\n> = force_generic_plan in postgresql.conf?\n>\n\nI did.\n\nFor full disclosure, I also had max_locks_per_transaction set to 10000.\n\n>\n> > Looking at your changes to README, I would like to suggest rewording\n> > the following:\n> >\n> > +table during planning.  This means that inheritance child tables, which\n> are\n> > +added to the query's range table during planning, if they are present\n> in a\n> > +cached plan tree would not have been locked.\n> >\n> > To:\n> >\n> > This means that inheritance child tables present in a cached plan\n> > tree, which are added to the query's range table during planning,\n> > would not have been locked.\n> >\n> > Also, further down:\n> >\n> > s/intiatialize/initialize/\n> >\n> > I'll carry on taking a closer look and see if I can break it.\n>\n> Thanks for looking.  I've fixed these issues in the attached updated\n> patch.  I've also changed the position of a newly added paragraph in\n> src/backend/executor/README so that it doesn't break the flow of the\n> existing text.\n>\n\nThanks.\n\nThom", "msg_date": "Tue, 18 Jul 2023 09:36:55 +0100", "msg_from": "Thom Brown <thom@linux.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "While chatting with Robert about this patch set, he suggested that it\nwould be better to break out some executor refactoring changes from\nthe main patch (0003) into a separate patch.  To wit, the changes to\nmake the PlanState tree cleanup in ExecEndPlan() non-recursive by\nwalking a flat list of PlanState nodes instead of the recursive tree\nwalk that ExecEndNode() currently does.  That allows us to cleanly\nhandle the cases where the PlanState tree is only partially\nconstructed when ExecInitNode() detects in the middle of its\nconstruction that the plan tree is no longer valid after receiving and\nprocessing an invalidation message on locking child tables.  Or at\nleast more cleanly than the previously proposed approach of adjusting\nExecEndNode() subroutines for the individual node types to gracefully\nhandle such partially initialized PlanState trees.\n\nWith the new approach, node type specific subroutines of ExecEndNode()\nneed not close its child nodes, because ExecEndPlan() would close each\nnode that would have been initialized directly.  I couldn't find any\ninstance of breakage by this decoupling of child node cleanup from\ntheir parent node's cleanup.  Comments in ExecEndGather() and\nExecEndGatherMerge() appear to suggest that outerPlan must be closed\nbefore the local cleanup:\n\n void\n ExecEndGather(GatherState *node)\n {\n- ExecEndNode(outerPlanState(node)); /* let children clean up first */\n+ /* outerPlan is closed separately. 
*/\n ExecShutdownGather(node);\n ExecFreeExprContext(&node->ps);\n\nBut I don't think there's a problem, because what ExecShutdownGather()\ndoes seems entirely independent of cleanup of outerPlan.\n\nAs for the performance impact of initializing the list of initialized\nnodes to use during the cleanup phase, I couldn't find a regression,\nnor any improvement by replacing the tree walk by linear scan of a\nlist. Actually, ExecEndNode() is pretty far down in the perf profile\nanyway, so the performance difference caused by the patch hardly\nmatters. See the following contrived example:\n\ncreate table f();\nanalyze f;\nexplain (costs off) select count(*) from f f1, f f2, f f3, f f4, f f5,\nf f6, f f7, f f8, f f9, f f10;\n QUERY PLAN\n------------------------------------------------------------------------------\n Aggregate\n -> Nested Loop\n -> Nested Loop\n -> Nested Loop\n -> Nested Loop\n -> Nested Loop\n -> Nested Loop\n -> Nested Loop\n -> Nested Loop\n -> Nested Loop\n -> Seq Scan on f f1\n -> Seq Scan on f f2\n -> Seq Scan on f f3\n -> Seq Scan on f f4\n -> Seq Scan on f f5\n -> Seq Scan on f f6\n -> Seq Scan on f f7\n -> Seq Scan on f f8\n -> Seq Scan on f f9\n -> Seq Scan on f f10\n(20 rows)\n\ndo $$\nbegin\nfor i in 1..100000 loop\nperform count(*) from f f1, f f2, f f3, f f4, f f5, f f6, f f7, f f8,\nf f9, f f10;\nend loop;\nend; $$;\n\nTimes for the DO:\n\nUnpatched:\nTime: 756.353 ms\nTime: 745.752 ms\nTime: 749.184 ms\n\nPatched:\nTime: 737.717 ms\nTime: 747.815 ms\nTime: 753.456 ms\n\nI've attached the new refactoring patch as 0001.\n\nAnother change I've made in the main patch is to change the API of\nExecutorStart() (and ExecutorStart_hook) more explicitly to return a\nboolean indicating whether or not the plan initialization was\nsuccessful. That way seems better than making the callers figure that\nout by seeing that QueryDesc.planstate is NULL and/or checking\nQueryDesc.plan_valid. 
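To make the flat-list cleanup idea described earlier in this message concrete, here is a rough standalone sketch. All of the struct and function names below are invented for illustration; they only loosely imitate the real PlanState/EState machinery. The point is just that a node registers itself at the *end* of its own initialization, so children always precede their parents in the list, and cleanup becomes one linear walk rather than a per-node-type recursion:

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins for PlanState/EState; not the real executor structures. */
typedef struct ToyNode
{
    const char *name;
    struct ToyNode *left;
    struct ToyNode *right;
} ToyNode;

#define MAX_TOY_NODES 32

typedef struct ToyEState
{
    ToyNode    *planstate_nodes[MAX_TOY_NODES]; /* flat list, in init order */
    int         num_nodes;
} ToyEState;

/*
 * Recursive initialization.  A node is appended to the flat list only
 * after its children, mirroring a node being appended to the estate's
 * list at the end of its init function.
 */
static void
toy_init_node(ToyEState *es, ToyNode *node)
{
    if (node == NULL)
        return;
    toy_init_node(es, node->left);
    toy_init_node(es, node->right);
    es->planstate_nodes[es->num_nodes++] = node;
}

/*
 * Non-recursive cleanup: a single linear walk over the flat list.
 * Because children were appended before their parents, each child is
 * "ended" before its parent.  Records the cleanup order into *order
 * and returns the number of nodes ended.
 */
static int
toy_end_plan(ToyEState *es, const ToyNode **order)
{
    for (int i = 0; i < es->num_nodes; i++)
        order[i] = es->planstate_nodes[i];  /* per-node cleanup goes here */
    return es->num_nodes;
}
```

With a two-leaf join built this way the list comes out as {leaf1, leaf2, join}, parent last; a partially built tree simply yields a shorter list, and cleanup walks exactly the nodes that were initialized.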
Correspondingly, PortalStart() now also returns\ntrue or false matching what ExecutorStart() returned. I suppose this\nbetter alerts any extensions that use the ExecutorStart_hook to fix\ntheir code to do the right thing.\n\nHaving extracted the ExecEndNode() change, I'm also starting to feel\ninclined to extract a couple of other bits from the main patch as\nseparate patches, such as moving the ExecutorStart() call from\nPortalRun() to PortalStart() for the multi-query portals. I'll do\nthat in the next version.", "msg_date": "Wed, 2 Aug 2023 22:39:45 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Wed, Aug 2, 2023 at 10:39 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Having extracted the ExecEndNode() change, I'm also starting to feel\n> inclined to extract a couple of other bits from the main patch as\n> separate patches, such as moving the ExecutorStart() call from\n> PortalRun() to PortalStart() for the multi-query portals. I'll do\n> that in the next version.\n\nHere's a patch set where the refactoring to move the ExecutorStart()\ncalls to be closer to GetCachedPlan() (for the call sites that use a\nCachedPlan) is extracted into a separate patch, 0002. 
Its commit\nmessage notes an aspect of this refactoring that I feel a bit nervous\nabout -- needing to also move the CommandCounterIncrement() call from\nthe loop in PortalRunMulti() to PortalStart() which now does\nExecutorStart() for the PORTAL_MULTI_QUERY case.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 3 Aug 2023 17:37:39 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Thu, Aug 3, 2023 at 4:37 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> Here's a patch set where the refactoring to move the ExecutorStart()\n> calls to be closer to GetCachedPlan() (for the call sites that use a\n> CachedPlan) is extracted into a separate patch, 0002. Its commit\n> message notes an aspect of this refactoring that I feel a bit nervous\n> about -- needing to also move the CommandCounterIncrement() call from\n> the loop in PortalRunMulti() to PortalStart() which now does\n> ExecutorStart() for the PORTAL_MULTI_QUERY case.\n\nI spent some time today reviewing 0001. Here are a few thoughts and\nnotes about things that I looked at.\n\nFirst, I wondered whether it was really adequate for ExecEndPlan() to\njust loop over estate->es_plan_nodes and call it good. Put\ndifferently, is it possible that we could ever have more than one\nrelevant EState, say for a subplan or an EPQ execution or something,\nso that this loop wouldn't cover everything? I found nothing to make\nme think that this is a real danger.\n\nSecond, I wondered whether the ordering of cleanup operations could be\nan issue. Right now, a node can position cleanup code before, after,\nor both before and after recursing to child nodes, whereas with this\ndesign change, the cleanup code will always be run before recursing to\nchild nodes. Here, I think we have problems. 
Both ExecGather and\nExecEndGatherMerge intentionally clean up the children before the\nparent, so that the child shutdown happens before\nExecParallelCleanup(). Based on the comment and commit\nacf555bc53acb589b5a2827e65d655fa8c9adee0, this appears to be\nintentional, and you can sort of see why from looking at the stuff\nthat happens in ExecParallelCleanup(). If the instrumentation data\nvanishes before the child nodes have a chance to clean things up,\nmaybe EXPLAIN ANALYZE won't reflect that instrumentation any more. If\nthe DSA vanishes, maybe we'll crash if we try to access it. If we\nactually reach DestroyParallelContext(), we're just going to start\nkilling the workers. None of that sounds like what we want.\n\nThe good news, of a sort, is that I think this might be the only case\nof this sort of problem. Most nodes recurse at the end, after doing\nall the cleanup, so the behavior won't change. Moreover, even if it\ndid, most cleanup operations look pretty localized -- they affect only\nthe node itself, and not its children. A somewhat interesting case is\nnodes associated with subplans. Right now, because of the coding of\nExecEndPlan, nodes associated with subplans are all cleaned up at the\nvery end, after everything that's not inside of a subplan. But with\nthis change, they'd get cleaned up in the order of initialization,\nwhich actually seems more natural, as long as it doesn't break\nanything, which I think it probably won't, since as I mention in most\ncases node cleanup looks quite localized, i.e. it doesn't care whether\nit happens before or after the cleanup of other nodes.\n\nI think something will have to be done about the parallel query stuff,\nthough. I'm not sure exactly what. It is a little weird that Gather\nand Gather Merge treat starting and killing workers as a purely\n\"private matter\" that they can decide to handle without the executor\noverall being very much aware of it. 
So maybe there's a way that some\nof the cleanup logic here could be hoisted up into the general\nexecutor machinery, that is, first end all the nodes, and then go\nback, and end all the parallelism using, maybe, another list inside of\nthe estate. However, I think that the existence of ExecShutdownNode()\nis a complication here -- we need to make sure that we don't break\neither the case where that happens before overall plan shutdown, or the\ncase where it doesn't.\n\nThird, a couple of minor comments on details of how you actually made\nthese changes in the patch set. Personally, I would remove all of the\n\"is closed separately\" comments that you added. I think it's a\nviolation of the general coding principle that you should make the\ncode look like it's always been that way. Sure, in the immediate\nfuture, people might wonder why you don't need to recurse, but 5 or 10\nyears from now that's just going to be clutter. Second, in the cases\nwhere the ExecEndNode functions end up completely empty, I would\nsuggest just removing the functions entirely and making the switch\nthat dispatches on the node type have a switch case that lists all the\nnodes that don't need a callback here and say /* Nothing to do for these\nnode types */ break;. This will save a few CPU cycles and I think it\nwill be easier to read as well.\n\nFourth, I wonder whether we really need this patch at all. I initially\nthought we did, because if we abandon the initialization of a plan\npartway through, then we end up with a plan that is in a state that\npreviously would never have occurred, and we still have to be able to\nclean it up. However, perhaps it's a difference without a distinction.\nSay we have a partial plan tree, where not all of the PlanState nodes\never got created. We then just call the existing version of\nExecEndPlan() on it, with no changes. What goes wrong? 
Sure, we might\ncall ExecEndNode() on some null pointers where in the current world\nthere would always be valid pointers, but ExecEndNode() will handle\nthat just fine, by doing nothing for those nodes, because it starts\nwith a NULL-check.\n\nAnother alternative design might be to switch ExecEndNode to use\nplanstate_tree_walker to walk the node tree, removing the walk from\nthe node-type-specific functions as in this patch, and deleting the\nend-node functions that are no longer required altogether, as proposed\nabove. I somehow feel that this would be cleaner than the status quo,\nbut here again, I'm not sure we really need it. planstate_tree_walker\nwould just pass over any NULL pointers that it found without doing\nanything, but the current code does that too, so while this might be\nmore beautiful than what we have now, I'm not sure that there's any\nreal reason to do it. The fact that, like the current patch, it would\nchange the order in which nodes are cleaned up is also an issue -- the\nGather/Gather Merge ordering issues might be easier to handle this way\nwith some hack in ExecEndNode() than they are with the design you have\nnow, but we'd still have to do something about them, I believe.\n\nSorry if this is a bit of a meandering review, but those are my thoughts.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Aug 2023 11:36:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Second, I wondered whether the ordering of cleanup operations could be\n> an issue. Right now, a node can position cleanup code before, after,\n> or both before and after recursing to child nodes, whereas with this\n> design change, the cleanup code will always be run before recursing to\n> child nodes. Here, I think we have problems. 
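The NULL-check behavior mentioned above can be illustrated with a standalone toy (again with invented names, only loosely imitating ExecEndNode() and planstate_tree_walker()): a walker that returns immediately on NULL simply skips any subtree whose initialization was never reached, so the same code cleans up complete and partial trees alike:

```c
#include <assert.h>
#include <stddef.h>

/* Toy plan node; the real code walks PlanState trees. */
typedef struct WNode
{
    struct WNode *left;
    struct WNode *right;
    int     ended;          /* set once "cleanup" has run on this node */
} WNode;

/*
 * NULL-tolerant recursive cleanup.  A NULL child -- standing in for a
 * subtree whose initialization was abandoned when an invalidation was
 * detected -- is passed over without any special casing.  Children are
 * ended before their parent.  Returns the number of nodes ended.
 */
static int
walk_and_end(WNode *node)
{
    int     count;

    if (node == NULL)
        return 0;           /* nothing was initialized here; skip it */
    count = walk_and_end(node->left) + walk_and_end(node->right);
    node->ended = 1;        /* per-node cleanup would go here */
    return count + 1;
}
```

Here a NULL right child models a node that never got created; the walk ends the nodes that do exist and steps over the hole, which is exactly the pass-over-NULL behavior relied on above.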
Both ExecGather and\n> ExecEndGatherMerge intentionally clean up the children before the\n> parent, so that the child shutdown happens before\n> ExecParallelCleanup(). Based on the comment and commit\n> acf555bc53acb589b5a2827e65d655fa8c9adee0, this appears to be\n> intentional, and you can sort of see why from looking at the stuff\n> that happens in ExecParallelCleanup().\n\nRight, I doubt that changing that is going to work out well.\nHash joins might have issues with it too.\n\nCould it work to make the patch force child cleanup before parent,\ninstead of after? Or would that break other places?\n\nOn the whole though I think it's probably a good idea to leave\nparent nodes in control of the timing, so I kind of side with\nyour later comment about whether we want to change this at all.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Aug 2023 11:44:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Mon, Aug 7, 2023 at 11:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Right, I doubt that changing that is going to work out well.\n> Hash joins might have issues with it too.\n\nI thought about the case, because Hash and Hash Join are such closely\nintertwined nodes, but I don't see any problem there. It doesn't\nreally look like it would matter in what order things got cleaned up.\nUnless I'm missing something, all of the data structures are just\nindependent things that we have to get rid of sometime.\n\n> Could it work to make the patch force child cleanup before parent,\n> instead of after? Or would that break other places?\n\nTo me, it seems like the overwhelming majority of the code simply\ndoesn't care. You could pick an order out of a hat and it would be\n100% OK. 
But I haven't gone and looked through it with this specific\nidea in mind.\n\n> On the whole though I think it's probably a good idea to leave\n> parent nodes in control of the timing, so I kind of side with\n> your later comment about whether we want to change this at all.\n\nMy overall feeling here is that what Gather and Gather Merge is doing\nis pretty weird. I think I kind of knew that at the time this was all\ngetting implemented and reviewed, but I wasn't keen to introduce more\ninfrastructure changes than necessary given that parallel query, as a\nproject, was still pretty new and I didn't want to give other hackers\nmore reasons to be unhappy with what was already a lot of very\nwide-ranging change to the system. A good number of years having gone\nby now, and other people having worked on that code some more, I'm not\ntoo worried about someone calling for a wholesale revert of parallel\nquery. However, there's a second problem here as well, which is that\nI'm still not sure what the right thing to do is. We've fiddled around\nwith the shutdown sequence for parallel query a number of times now,\nand I think there's still stuff that doesn't work quite right,\nespecially around getting all of the instrumentation data back to the\nleader. I haven't spent enough time on this recently enough to be sure\nwhat if any problems remain, though.\n\nSo on the one hand, I don't really like the fact that we have an\nad-hoc recursion arrangement here, instead of using\nplanstate_tree_walker or, as Amit proposes, a List. Giving subordinate\nnodes control over the ordering when they don't really need it just\nmeans we have more code with more possibility for bugs and less\ncertainty about whether the theoretical flexibility is doing anything\nin practice. 
But on the other hand, because we know that at least for\nthe Gather/GatherMerge case it seems like it probably matters\nsomewhat, it definitely seems appealing not to change anything as part\nof this patch set that we don't really have to.\n\nI've had it firmly in my mind here that we were going to need to\nchange something somehow -- I mean, the possibility of returning in\nthe middle of node initialization seems like a pretty major change to\nthe way this stuff works, and it seems hard for me to believe that we\ncan just do that and not have to adjust any code anywhere else. Can it\nreally be true that we can do that and yet not end up creating any\nstates anywhere with which the current cleanup code is unprepared to\ncope? Maybe, but it would seem like rather good luck if that's how it\nshakes out. Still, at the moment, I'm having a hard time understanding\nwhat this particular change buys us.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Aug 2023 12:25:52 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Aug 8, 2023 at 12:36 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Aug 3, 2023 at 4:37 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Here's a patch set where the refactoring to move the ExecutorStart()\n> > calls to be closer to GetCachedPlan() (for the call sites that use a\n> > CachedPlan) is extracted into a separate patch, 0002. Its commit\n> > message notes an aspect of this refactoring that I feel a bit nervous\n> > about -- needing to also move the CommandCounterIncrement() call from\n> > the loop in PortalRunMulti() to PortalStart() which now does\n> > ExecutorStart() for the PORTAL_MULTI_QUERY case.\n>\n> I spent some time today reviewing 0001. 
Here are a few thoughts and\n> notes about things that I looked at.\n\nThanks for taking a look at this.\n\n> First, I wondered whether it was really adequate for ExecEndPlan() to\n> just loop over estate->es_plan_nodes and call it good. Put\n> differently, is it possible that we could ever have more than one\n> relevant EState, say for a subplan or an EPQ execution or something,\n> so that this loop wouldn't cover everything? I found nothing to make\n> me think that this is a real danger.\n\nCheck.\n\n> Second, I wondered whether the ordering of cleanup operations could be\n> an issue. Right now, a node can position cleanup code before, after,\n> or both before and after recursing to child nodes, whereas with this\n> design change, the cleanup code will always be run before recursing to\n> child nodes.\n\nBecause a node is appended to es_planstate_nodes at the end of\nExecInitNode(), child nodes get added before their parent nodes. So\nthe children are cleaned up first.\n\n> Here, I think we have problems. Both ExecGather and\n> ExecEndGatherMerge intentionally clean up the children before the\n> parent, so that the child shutdown happens before\n> ExecParallelCleanup(). Based on the comment and commit\n> acf555bc53acb589b5a2827e65d655fa8c9adee0, this appears to be\n> intentional, and you can sort of see why from looking at the stuff\n> that happens in ExecParallelCleanup(). If the instrumentation data\n> vanishes before the child nodes have a chance to clean things up,\n> maybe EXPLAIN ANALYZE won't reflect that instrumentation any more. If\n> the DSA vanishes, maybe we'll crash if we try to access it. If we\n> actually reach DestroyParallelContext(), we're just going to start\n> killing the workers. None of that sounds like what we want.\n>\n> The good news, of a sort, is that I think this might be the only case\n> of this sort of problem. Most nodes recurse at the end, after doing\n> all the cleanup, so the behavior won't change. 
Moreover, even if it\n> did, most cleanup operations look pretty localized -- they affect only\n> the node itself, and not its children. A somewhat interesting case is\n> nodes associated with subplans. Right now, because of the coding of\n> ExecEndPlan, nodes associated with subplans are all cleaned up at the\n> very end, after everything that's not inside of a subplan. But with\n> this change, they'd get cleaned up in the order of initialization,\n> which actually seems more natural, as long as it doesn't break\n> anything, which I think it probably won't, since as I mention in most\n> cases node cleanup looks quite localized, i.e. it doesn't care whether\n> it happens before or after the cleanup of other nodes.\n>\n> I think something will have to be done about the parallel query stuff,\n> though. I'm not sure exactly what. It is a little weird that Gather\n> and Gather Merge treat starting and killing workers as a purely\n> \"private matter\" that they can decide to handle without the executor\n> overall being very much aware of it. So maybe there's a way that some\n> of the cleanup logic here could be hoisted up into the general\n> executor machinery, that is, first end all the nodes, and then go\n> back, and end all the parallelism using, maybe, another list inside of\n> the estate. However, I think that the existence of ExecShutdownNode()\n> is a complication here -- we need to make sure that we don't break\n> either the case where that happen before overall plan shutdown, or the\n> case where it doesn't.\n\nGiven that children are closed before parent, the order of operations\nin ExecEndGather[Merge] is unchanged.\n\n> Third, a couple of minor comments on details of how you actually made\n> these changes in the patch set. Personally, I would remove all of the\n> \"is closed separately\" comments that you added. I think it's a\n> violation of the general coding principle that you should make the\n> code look like it's always been that way. 
Sure, in the immediate\n> future, people might wonder why you don't need to recurse, but 5 or 10\n> years from now that's just going to be clutter. Second, in the cases\n> where the ExecEndNode functions end up completely empty, I would\n> suggest just removing the functions entirely and making the switch\n> that dispatches on the node type have a switch case that lists all the\n> nodes that don't need a callback here and say /* Nothing do for these\n> node types */ break;. This will save a few CPU cycles and I think it\n> will be easier to read as well.\n\nI agree with both suggestions.\n\n> Fourth, I wonder whether we really need this patch at all. I initially\n> thought we did, because if we abandon the initialization of a plan\n> partway through, then we end up with a plan that is in a state that\n> previously would never have occurred, and we still have to be able to\n> clean it up. However, perhaps it's a difference without a distinction.\n> Say we have a partial plan tree, where not all of the PlanState nodes\n> ever got created. We then just call the existing version of\n> ExecEndPlan() on it, with no changes. What goes wrong? Sure, we might\n> call ExecEndNode() on some null pointers where in the current world\n> there would always be valid pointers, but ExecEndNode() will handle\n> that just fine, by doing nothing for those nodes, because it starts\n> with a NULL-check.\n\nWell, not all cleanup actions for a given node type are a recursive\ncall to ExecEndNode(), some are also things like this:\n\n /*\n * clean out the tuple table\n */\n ExecClearTuple(node->ps.ps_ResultTupleSlot);\n\nBut should ExecInitNode() subroutines return the partially initialized\nPlanState node or NULL on detecting invalidation? 
If I'm\nunderstanding how you think this should be working correctly, I think\nyou mean the former, because if it were the latter, ExecInitNode()\nwould end up returning NULL at the top for the root and then there's\nnothing to pass to ExecEndNode(), so no way to clean up to begin with.\nIn that case, I think we will need to adjust ExecEndNode() subroutines\nto add `if (node->ps.ps_ResultTupleSlot)` in the above code, for\nexample. That's something Tom had said he doesn't like very much [1].\n\nSome node types such as Append, BitmapAnd, etc. that contain a list of\nsubplans would need some adjustment, such as using palloc0 for\nas_appendplans[], etc. so that uninitialized subplans have NULL in the\narray.\n\nThere are also issues around ForeignScan, CustomScan\nExecEndNode()-time callbacks when they are partially initialized -- is\nit OK to call the *EndScan callback if the *BeginScan one may not have\nbeen called to begin with? Though, perhaps we can adjust the\nExecInitNode() subroutines for those to return NULL by opening the\nrelation and checking for invalidation at the beginning instead of in\nthe middle. That should be done for all Scan or leaf-level node\ntypes.\n\nAnyway, I guess, for the patch's purpose, maybe we should bite the\nbullet and make those adjustments rather than change ExecEndNode() as\nproposed. I can give that another try.\n\n> Another alternative design might be to switch ExecEndNode to use\n> planstate_tree_walker to walk the node tree, removing the walk from\n> the node-type-specific functions as in this patch, and deleting the\n> end-node functions that are no longer required altogether, as proposed\n> above. I somehow feel that this would be cleaner than the status quo,\n> but here again, I'm not sure we really need it. 
planstate_tree_walker\n> would just pass over any NULL pointers that it found without doing\n> anything, but the current code does that too, so while this might be\n> more beautiful than what we have now, I'm not sure that there's any\n> real reason to do it. The fact that, like the current patch, it would\n> change the order in which nodes are cleaned up is also an issue -- the\n> Gather/Gather Merge ordering issues might be easier to handle this way\n> with some hack in ExecEndNode() than they are with the design you have\n> now, but we'd still have to do something about them, I believe.\n\nIt might be interesting to see if introducing planstate_tree_walker()\nin ExecEndNode() makes it easier to reason about ExecEndNode()\ngenerally speaking, but I think you may be right that doing so may not\nreally make matters easier for the partially initialized planstate\ntree case.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 8 Aug 2023 23:32:07 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Aug 8, 2023 at 10:32 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> But should ExecInitNode() subroutines return the partially initialized\n> PlanState node or NULL on detecting invalidation? If I'm\n> understanding how you think this should be working correctly, I think\n> you mean the former, because if it were the latter, ExecInitNode()\n> would end up returning NULL at the top for the root and then there's\n> nothing to pass to ExecEndNode(), so no way to clean up to begin with.\n> In that case, I think we will need to adjust ExecEndNode() subroutines\n> to add `if (node->ps.ps_ResultTupleSlot)` in the above code, for\n> example. 
That's something Tom had said he doesn't like very much [1].\n\nYeah, I understood Tom's goal as being \"don't return partially\ninitialized nodes.\"\n\nPersonally, I'm not sure that's an important goal. In fact, I don't\neven think it's a desirable one. It doesn't look difficult to audit\nthe end-node functions for cases where they'd fail if a particular\npointer were NULL instead of pointing to some real data, and just\nfixing all such cases to have NULL-tests looks like purely mechanical\nwork that we are unlikely to get wrong. And at least some cases\nwouldn't require any changes at all.\n\nIf we don't do that, the complexity doesn't go away. It just moves\nsomeplace else. Presumably what we do in that case is have\nExecInitNode functions undo any initialization that they've already\ndone before returning NULL. There are basically two ways to do that.\nOption one is to add code at the point where they return early to\nclean up anything they've already initialized, but that code is likely\nto substantially duplicate whatever the ExecEndNode function already\nknows how to do, and it's very easy for logic like this to get broken\nif somebody rearranges an ExecInitNode function down the road. Option\ntwo is to rearrange the ExecInitNode functions now, to open relations\nor recurse at the beginning, so that we discover the need to fail\nbefore we initialize anything. That restricts our ability to further\nrearrange the functions in future somewhat, but more importantly,\nIMHO, it introduces more risk right now. Checking that the ExecEndNode\nfunction will not fail if some pointers are randomly null is a lot\neasier than checking that changing the order of operations in an\nExecInitNode function breaks nothing.\n\nI'm not here to say that we can't do one of those things. 
But I think\nadding null-tests to ExecEndNode functions looks like *far* less work\nand *way* less risk.\n\nThere's a second issue here, too, which is when we abort ExecInitNode\npartway through, how do we signal that? You're rightly pointing out\nhere that if we do that by returning NULL, then we don't do it by\nreturning a pointer to the partially initialized node that we just\ncreated, which means that we either need to store those partially\ninitialized nodes in a separate data structure as you propose to do in\n0001, or else we need to pick a different signalling convention. We\ncould change (a) ExecInitNode to have an additional argument, bool\n*kaboom, or (b) we could make it return bool and return the node\npointer via a new additional argument, or (c) we could put a Boolean\nflag into the estate and let the function signal failure by flipping\nthe value of the flag. If we do any of those things, then as far as I\ncan see 0001 is unnecessary. If we do none of them but also avoid\ncreating partially initialized nodes by one of the two techniques\nmentioned two paragraphs prior, then 0001 is also unnecessary. 
If we\ndo none of them but do create partially initialized nodes, then we\nneed 0001.\n\nSo if this were a restaurant menu, then it might look like this:\n\nPrix Fixe Menu (choose one from each)\n\nFirst Course - How do we clean up after partial initialization?\n(1) ExecInitNode functions produce partially initialized nodes\n(2) ExecInitNode functions get refactored so that the stuff that can\ncause early exit always happens first, so that no cleanup is ever\nneeded\n(3) ExecInitNode functions do any required cleanup in situ\n\nSecond Course - How do we signal that initialization stopped early?\n(A) Return NULL.\n(B) Add a bool * out-parmeter to ExecInitNode.\n(C) Add a Node * out-parameter to ExecInitNode and change the return\nvalue to bool.\n(D) Add a bool to the EState.\n(E) Something else, maybe.\n\nI think that we need 0001 if we choose specifically (1) and (A). My\ngut feeling is that the least-invasive way to do this project is to\nchoose (1) and (D). My second choice would be (1) and (C), and my\nthird choice would be (1) and (A). If I can't have (1), I think I\nprefer (2) over (3), but I also believe I prefer hiding in a deep hole\nto either of them. Maybe I'm not seeing the whole picture correctly\nhere, but both (2) and (3) look awfully painful to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 8 Aug 2023 12:05:26 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Wed, Aug 9, 2023 at 1:05 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Aug 8, 2023 at 10:32 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > But should ExecInitNode() subroutines return the partially initialized\n> > PlanState node or NULL on detecting invalidation? 
If I'm\n> > understanding how you think this should be working correctly, I think\n> > you mean the former, because if it were the latter, ExecInitNode()\n> > would end up returning NULL at the top for the root and then there's\n> > nothing to pass to ExecEndNode(), so no way to clean up to begin with.\n> > In that case, I think we will need to adjust ExecEndNode() subroutines\n> > to add `if (node->ps.ps_ResultTupleSlot)` in the above code, for\n> > example. That's something Tom had said he doesn't like very much [1].\n>\n> Yeah, I understood Tom's goal as being \"don't return partially\n> initialized nodes.\"\n>\n> Personally, I'm not sure that's an important goal. In fact, I don't\n> even think it's a desirable one. It doesn't look difficult to audit\n> the end-node functions for cases where they'd fail if a particular\n> pointer were NULL instead of pointing to some real data, and just\n> fixing all such cases to have NULL-tests looks like purely mechanical\n> work that we are unlikely to get wrong. And at least some cases\n> wouldn't require any changes at all.\n>\n> If we don't do that, the complexity doesn't go away. It just moves\n> someplace else. Presumably what we do in that case is have\n> ExecInitNode functions undo any initialization that they've already\n> done before returning NULL. There are basically two ways to do that.\n> Option one is to add code at the point where they return early to\n> clean up anything they've already initialized, but that code is likely\n> to substantially duplicate whatever the ExecEndNode function already\n> knows how to do, and it's very easy for logic like this to get broken\n> if somebody rearranges an ExecInitNode function down the road.\n\nYeah, I too am not a fan of making ExecInitNode() clean up partially\ninitialized nodes.\n\n> Option\n> two is to rearrange the ExecInitNode functions now, to open relations\n> or recurse at the beginning, so that we discover the need to fail\n> before we initialize anything. 
That restricts our ability to further\n> rearrange the functions in future somewhat, but more importantly,\n> IMHO, it introduces more risk right now. Checking that the ExecEndNode\n> function will not fail if some pointers are randomly null is a lot\n> easier than checking that changing the order of operations in an\n> ExecInitNode function breaks nothing.\n>\n> I'm not here to say that we can't do one of those things. But I think\n> adding null-tests to ExecEndNode functions looks like *far* less work\n> and *way* less risk.\n\n+1\n\n> There's a second issue here, too, which is when we abort ExecInitNode\n> partway through, how do we signal that? You're rightly pointing out\n> here that if we do that by returning NULL, then we don't do it by\n> returning a pointer to the partially initialized node that we just\n> created, which means that we either need to store those partially\n> initialized nodes in a separate data structure as you propose to do in\n> 0001,\n>\n> or else we need to pick a different signalling convention. We\n> could change (a) ExecInitNode to have an additional argument, bool\n> *kaboom, or (b) we could make it return bool and return the node\n> pointer via a new additional argument, or (c) we could put a Boolean\n> flag into the estate and let the function signal failure by flipping\n> the value of the flag.\n\nThe failure can already be detected by seeing that\nExecPlanIsValid(estate) is false. The question is what ExecInitNode()\nor any of its subroutines should return once it is. I think the\nfollowing convention works:\n\nReturn partially initialized state from ExecInit* function where we\ndetect the invalidation after calling ExecInitNode() on a child plan,\nso that ExecEndNode() can recurse to clean it up.\n\nReturn NULL from ExecInit* functions where we detect the invalidation\nafter opening and locking a relation but before calling ExecInitNode()\nto initialize a child plan if there's one at all. 
Even if we may set\nthings like ExprContext, TupleTableSlot fields, they are cleaned up\nindependently of the plan tree anyway via the cleanup called with\nes_exprcontexts, es_tupleTable, respectively. I even noticed bits\nlike this in ExecEnd* functions:\n\n- /*\n- * Free the exprcontext(s) ... now dead code, see ExecFreeExprContext\n- */\n-#ifdef NOT_USED\n- ExecFreeExprContext(&node->ss.ps);\n- if (node->ioss_RuntimeContext)\n- FreeExprContext(node->ioss_RuntimeContext, true);\n-#endif\n\nSo, AFAICS, ExprContext, TupleTableSlot cleanup in ExecNode* functions\nis unnecessary but remain around because nobody cared about and got\naround to getting rid of it.\n\n> If we do any of those things, then as far as I\n> can see 0001 is unnecessary. If we do none of them but also avoid\n> creating partially initialized nodes by one of the two techniques\n> mentioned two paragraphs prior, then 0001 is also unnecessary. If we\n> do none of them but do create partially initialized nodes, then we\n> need 0001.\n>\n> So if this were a restaurant menu, then it might look like this:\n>\n> Prix Fixe Menu (choose one from each)\n>\n> First Course - How do we clean up after partial initialization?\n> (1) ExecInitNode functions produce partially initialized nodes\n> (2) ExecInitNode functions get refactored so that the stuff that can\n> cause early exit always happens first, so that no cleanup is ever\n> needed\n> (3) ExecInitNode functions do any required cleanup in situ\n>\n> Second Course - How do we signal that initialization stopped early?\n> (A) Return NULL.\n> (B) Add a bool * out-parmeter to ExecInitNode.\n> (C) Add a Node * out-parameter to ExecInitNode and change the return\n> value to bool.\n> (D) Add a bool to the EState.\n> (E) Something else, maybe.\n>\n> I think that we need 0001 if we choose specifically (1) and (A). My\n> gut feeling is that the least-invasive way to do this project is to\n> choose (1) and (D). 
My second choice would be (1) and (C), and my\n> third choice would be (1) and (A). If I can't have (1), I think I\n> prefer (2) over (3), but I also believe I prefer hiding in a deep hole\n> to either of them. Maybe I'm not seeing the whole picture correctly\n> here, but both (2) and (3) look awfully painful to me.\n\nI think what I've ended up with in the attached 0001 (WIP) is both\n(1), (2), and (D). As mentioned above, (D) is implemented with the\nExecPlanStillValid() function.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 11 Aug 2023 14:31:03 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Aug 11, 2023 at 14:31 Amit Langote <amitlangote09@gmail.com> wrote:\n\n> On Wed, Aug 9, 2023 at 1:05 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Tue, Aug 8, 2023 at 10:32 AM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> > > But should ExecInitNode() subroutines return the partially initialized\n> > > PlanState node or NULL on detecting invalidation? If I'm\n> > > understanding how you think this should be working correctly, I think\n> > > you mean the former, because if it were the latter, ExecInitNode()\n> > > would end up returning NULL at the top for the root and then there's\n> > > nothing to pass to ExecEndNode(), so no way to clean up to begin with.\n> > > In that case, I think we will need to adjust ExecEndNode() subroutines\n> > > to add `if (node->ps.ps_ResultTupleSlot)` in the above code, for\n> > > example. That's something Tom had said he doesn't like very much [1].\n> >\n> > Yeah, I understood Tom's goal as being \"don't return partially\n> > initialized nodes.\"\n> >\n> > Personally, I'm not sure that's an important goal. In fact, I don't\n> > even think it's a desirable one. 
It doesn't look difficult to audit\n> > the end-node functions for cases where they'd fail if a particular\n> > pointer were NULL instead of pointing to some real data, and just\n> > fixing all such cases to have NULL-tests looks like purely mechanical\n> > work that we are unlikely to get wrong. And at least some cases\n> > wouldn't require any changes at all.\n> >\n> > If we don't do that, the complexity doesn't go away. It just moves\n> > someplace else. Presumably what we do in that case is have\n> > ExecInitNode functions undo any initialization that they've already\n> > done before returning NULL. There are basically two ways to do that.\n> > Option one is to add code at the point where they return early to\n> > clean up anything they've already initialized, but that code is likely\n> > to substantially duplicate whatever the ExecEndNode function already\n> > knows how to do, and it's very easy for logic like this to get broken\n> > if somebody rearranges an ExecInitNode function down the road.\n>\n> Yeah, I too am not a fan of making ExecInitNode() clean up partially\n> initialized nodes.\n>\n> > Option\n> > two is to rearrange the ExecInitNode functions now, to open relations\n> > or recurse at the beginning, so that we discover the need to fail\n> > before we initialize anything. That restricts our ability to further\n> > rearrange the functions in future somewhat, but more importantly,\n> > IMHO, it introduces more risk right now. Checking that the ExecEndNode\n> > function will not fail if some pointers are randomly null is a lot\n> > easier than checking that changing the order of operations in an\n> > ExecInitNode function breaks nothing.\n> >\n> > I'm not here to say that we can't do one of those things. But I think\n> > adding null-tests to ExecEndNode functions looks like *far* less work\n> > and *way* less risk.\n>\n> +1\n>\n> > There's a second issue here, too, which is when we abort ExecInitNode\n> > partway through, how do we signal that? 
You're rightly pointing out\n> > here that if we do that by returning NULL, then we don't do it by\n> > returning a pointer to the partially initialized node that we just\n> > created, which means that we either need to store those partially\n> > initialized nodes in a separate data structure as you propose to do in\n> > 0001,\n> >\n> > or else we need to pick a different signalling convention. We\n> > could change (a) ExecInitNode to have an additional argument, bool\n> > *kaboom, or (b) we could make it return bool and return the node\n> > pointer via a new additional argument, or (c) we could put a Boolean\n> > flag into the estate and let the function signal failure by flipping\n> > the value of the flag.\n>\n> The failure can already be detected by seeing that\n> ExecPlanIsValid(estate) is false. The question is what ExecInitNode()\n> or any of its subroutines should return once it is. I think the\n> following convention works:\n>\n> Return partially initialized state from ExecInit* function where we\n> detect the invalidation after calling ExecInitNode() on a child plan,\n> so that ExecEndNode() can recurse to clean it up.\n>\n> Return NULL from ExecInit* functions where we detect the invalidation\n> after opening and locking a relation but before calling ExecInitNode()\n> to initialize a child plan if there's one at all. Even if we may set\n> things like ExprContext, TupleTableSlot fields, they are cleaned up\n> independently of the plan tree anyway via the cleanup called with\n> es_exprcontexts, es_tupleTable, respectively. I even noticed bits\n> like this in ExecEnd* functions:\n>\n> - /*\n> - * Free the exprcontext(s) ... 
now dead code, see ExecFreeExprContext\n> - */\n> -#ifdef NOT_USED\n> - ExecFreeExprContext(&node->ss.ps);\n> - if (node->ioss_RuntimeContext)\n> - FreeExprContext(node->ioss_RuntimeContext, true);\n> -#endif\n>\n> So, AFAICS, ExprContext, TupleTableSlot cleanup in ExecNode* functions\n> is unnecessary but remain around because nobody cared about and got\n> around to getting rid of it.\n>\n> > If we do any of those things, then as far as I\n> > can see 0001 is unnecessary. If we do none of them but also avoid\n> > creating partially initialized nodes by one of the two techniques\n> > mentioned two paragraphs prior, then 0001 is also unnecessary. If we\n> > do none of them but do create partially initialized nodes, then we\n> > need 0001.\n> >\n> > So if this were a restaurant menu, then it might look like this:\n> >\n> > Prix Fixe Menu (choose one from each)\n> >\n> > First Course - How do we clean up after partial initialization?\n> > (1) ExecInitNode functions produce partially initialized nodes\n> > (2) ExecInitNode functions get refactored so that the stuff that can\n> > cause early exit always happens first, so that no cleanup is ever\n> > needed\n> > (3) ExecInitNode functions do any required cleanup in situ\n> >\n> > Second Course - How do we signal that initialization stopped early?\n> > (A) Return NULL.\n> > (B) Add a bool * out-parmeter to ExecInitNode.\n> > (C) Add a Node * out-parameter to ExecInitNode and change the return\n> > value to bool.\n> > (D) Add a bool to the EState.\n> > (E) Something else, maybe.\n> >\n> > I think that we need 0001 if we choose specifically (1) and (A). My\n> > gut feeling is that the least-invasive way to do this project is to\n> > choose (1) and (D). My second choice would be (1) and (C), and my\n> > third choice would be (1) and (A). If I can't have (1), I think I\n> > prefer (2) over (3), but I also believe I prefer hiding in a deep hole\n> > to either of them. 
Maybe I'm not seeing the whole picture correctly\n> > here, but both (2) and (3) look awfully painful to me.\n>\n> I think what I've ended up with in the attached 0001 (WIP) is both\n> (1), (2), and (D).  As mentioned above, (D) is implemented with the\n> ExecPlanStillValid() function.\n\n\nAfter removing the unnecessary cleanup code from most node types’ ExecEnd*\nfunctions, one thing I’m tempted to do is remove the functions that do\nnothing else but recurse to close the outerPlan, innerPlan child nodes.  We\ncould instead have ExecEndNode() itself recurse to close outerPlan,\ninnerPlan child nodes at the top, which preserves the\nclose-child-before-self behavior for Gather* nodes, and close node type\nspecific cleanup functions for nodes that do have any local cleanup to do.\nPerhaps, we could even use planstate_tree_walker() called at the top\ninstead of the usual bottom so that nodes with a list of child subplans\nlike Append also don’t need to have their own ExecEnd* functions.\n\n> --\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 11 Aug 2023 22:50:26 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Aug 11, 2023 at 9:50 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> After removing the unnecessary cleanup code from most node types’ ExecEnd* functions, one thing I’m tempted to do is remove the functions that do nothing else but recurse to close the outerPlan, innerPlan child nodes.  We could instead have ExecEndNode() itself recurse to close outerPlan, innerPlan child nodes at the top, which preserves the close-child-before-self behavior for Gather* nodes, and close node type specific cleanup functions for nodes that do have any local cleanup to do.  
Perhaps, we could even use planstate_tree_walker() called at the top instead of the usual bottom so that nodes with a list of child subplans like Append also don’t need to have their own ExecEnd* functions.\n\nI think 0001 needs to be split up. Like, this is code cleanup:\n\n- /*\n- * Free the exprcontext\n- */\n- ExecFreeExprContext(&node->ss.ps);\n\nThis is providing for NULL pointers where we don't currently:\n\n- list_free_deep(aggstate->hash_batches);\n+ if (aggstate->hash_batches)\n+ list_free_deep(aggstate->hash_batches);\n\nAnd this is the early return mechanism per se:\n\n+ if (!ExecPlanStillValid(estate))\n+ return aggstate;\n\nI think at least those 3 kinds of changes deserve to be in separate\npatches with separate commit messages explaining the rationale behind\neach e.g. \"Remove unnecessary cleanup calls in ExecEnd* functions.\nThese calls are no longer required, because <reasons>. Removing them\nsaves a few CPU cycles and simplifies planned refactoring, so do\nthat.\"\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 28 Aug 2023 09:43:41 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Thanks for taking a look.\n\nOn Mon, Aug 28, 2023 at 10:43 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Aug 11, 2023 at 9:50 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > After removing the unnecessary cleanup code from most node types’ ExecEnd* functions, one thing I’m tempted to do is remove the functions that do nothing else but recurse to close the outerPlan, innerPlan child nodes. We could instead have ExecEndNode() itself recurse to close outerPlan, innerPlan child nodes at the top, which preserves the close-child-before-self behavior for Gather* nodes, and close node type specific cleanup functions for nodes that do have any local cleanup to do. 
Perhaps, we could even use planstate_tree_walker() called at the top instead of the usual bottom so that nodes with a list of child subplans like Append also don’t need to have their own ExecEnd* functions.\n>\n> I think 0001 needs to be split up. Like, this is code cleanup:\n>\n> - /*\n> - * Free the exprcontext\n> - */\n> - ExecFreeExprContext(&node->ss.ps);\n>\n> This is providing for NULL pointers where we don't currently:\n>\n> - list_free_deep(aggstate->hash_batches);\n> + if (aggstate->hash_batches)\n> + list_free_deep(aggstate->hash_batches);\n>\n> And this is the early return mechanism per se:\n>\n> + if (!ExecPlanStillValid(estate))\n> + return aggstate;\n>\n> I think at least those 3 kinds of changes deserve to be in separate\n> patches with separate commit messages explaining the rationale behind\n> each e.g. \"Remove unnecessary cleanup calls in ExecEnd* functions.\n> These calls are no longer required, because <reasons>. Removing them\n> saves a few CPU cycles and simplifies planned refactoring, so do\n> that.\"\n\nBreaking up the patch as you describe makes sense, so I've done that:\n\nAttached 0001 removes unnecessary cleanup calls from ExecEnd*() routines.\n\n0002 adds NULLness checks in ExecEnd*() routines on some pointers that\nmay not be initialized by the corresponding ExecInit*() routines in\nthe case where it returns early.\n\n0003 adds the early return mechanism based on checking CachedPlan\ninvalidation, though no CachedPlan is actually passed to the executor\nyet, so no functional changes here yet.\n\nOther patches are rebased over these. One significant change is in\n0004 which does the refactoring to make the callers of ExecutorStart()\naware that it may now return with a partially initialized planstate\ntree that should not be executed. I added a new flag\nEState.es_canceled to denote that state of the execution to complement\nthe existing es_finished. 
I also needed to add\nAfterTriggerCancelQuery() to ensure that we don't attempt to fire a\ncanceled query's triggers. Most of these changes are needed only to\nappease the various Asserts in these parts of the code and I thought\nthey are warranted given the introduction of a new state of query\nexecution.\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 5 Sep 2023 16:13:09 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Sep 5, 2023 at 3:13 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> Attached 0001 removes unnecessary cleanup calls from ExecEnd*() routines.\n\nIt also adds a few random Assert()s to verify that unrelated pointers\nare not NULL. I suggest that it shouldn't do that.\n\nThe commit message doesn't mention the removal of the calls to\nExecDropSingleTupleTableSlot. It's not clear to me why that's OK and I\nthink it would be nice to mention it in the commit message, assuming\nthat it is in fact OK.\n\nI suggest changing the subject line of the commit to something like\n\"Remove obsolete executor cleanup code.\"\n\n> 0002 adds NULLness checks in ExecEnd*() routines on some pointers that\n> may not be initialized by the corresponding ExecInit*() routines in\n> the case where it returns early.\n\nI think you should only add these where it's needed. For example, I\nthink list_free_deep(NIL) is fine.\n\nThe changes to ExecEndForeignScan look like they include stuff that\nbelongs in 0001.\n\nPersonally, I prefer explicit NULL-tests i.e. 
if (x != NULL) to\nimplicit ones like if (x), but opinions vary.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Sep 2023 10:41:02 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Sep 5, 2023 at 11:41 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Sep 5, 2023 at 3:13 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Attached 0001 removes unnecessary cleanup calls from ExecEnd*() routines.\n>\n> It also adds a few random Assert()s to verify that unrelated pointers\n> are not NULL. I suggest that it shouldn't do that.\n\nOK, removed.\n\n> The commit message doesn't mention the removal of the calls to\n> ExecDropSingleTupleTableSlot. It's not clear to me why that's OK and I\n> think it would be nice to mention it in the commit message, assuming\n> that it is in fact OK.\n\nThat is not OK, so I dropped their removal. I think I confused them\nwith slots in other functions initialized with\nExecInitExtraTupleSlot() that *are* put into the estate.\n\n> I suggest changing the subject line of the commit to something like\n> \"Remove obsolete executor cleanup code.\"\n\nSure.\n\n> > 0002 adds NULLness checks in ExecEnd*() routines on some pointers that\n> > may not be initialized by the corresponding ExecInit*() routines in\n> > the case where it returns early.\n>\n> I think you should only add these where it's needed. For example, I\n> think list_free_deep(NIL) is fine.\n\nOK, done.\n\n> The changes to ExecEndForeignScan look like they include stuff that\n> belongs in 0001.\n\nOops, yes. Moved to 0001.\n\n> Personally, I prefer explicit NULL-tests i.e. 
if (x != NULL) to\n> implicit ones like if (x), but opinions vary.\n\nI agree, so changed all the new tests to use (x != NULL) form.\nTypically, I try to stick with whatever style is used in the nearby\ncode, though I can see both styles being used in the ExecEnd*()\nroutines. I opted to use the style that we both happen to prefer.\n\nAttached updated patches. Thanks for the review.\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 6 Sep 2023 18:12:28 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Wed, Sep 6, 2023 at 5:12 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> Attached updated patches. Thanks for the review.\n\nI think 0001 looks ready to commit. I'm not sure that the commit\nmessage needs to mention future patches here, since this code cleanup\nseems like a good idea regardless, but if you feel otherwise, fair\nenough.\n\nOn 0002, some questions:\n\n- In ExecEndLockRows, is the call to EvalPlanQualEnd a concern? i.e.\nDoes that function need any adjustment?\n- In ExecEndMemoize, should there be a null-test around\nMemoryContextDelete(node->tableContext) as we have in\nExecEndRecursiveUnion, ExecEndSetOp, etc.?\n\nI wonder how we feel about setting pointers to NULL after freeing the\nassociated data structures. The existing code isn't consistent about\ndoing that, and making it do so would be a fairly large change that\nwould bloat this patch quite a bit. On the other hand, I think it's a\ngood practice as a general matter, and we do do it in some ExecEnd\nfunctions.\n\nOn 0003, I have some doubt about whether we really have all the right\ndesign decisions in detail here:\n\n- Why have this weird rule where sometimes we return NULL and other\ntimes the planstate? Is there any point to such a coding rule? 
Why not\njust always return the planstate?\n\n- Is there any point to all of these early exit cases? For example, in\nExecInitBitmapAnd, why exit early if initialization fails? Why not\njust plunge ahead and if initialization failed the caller will notice\nthat and when we ExecEndNode some of the child node pointers will be\nNULL but who cares? The obvious disadvantage of this approach is that\nwe're doing a bunch of unnecessary initialization, but we're also\nspeeding up the common case where we don't need to abort by avoiding a\nbranch that will rarely be taken. I'm not quite sure what the right\nthing to do is here.\n\n- The cases where we call ExecGetRangeTableRelation or\nExecOpenScanRelation are a bit subtler ... maybe initialization that\nwe're going to do later is going to barf if the tuple descriptor of\nthe relation isn't what we thought it was going to be. In that case it\nbecomes important to exit early. But if that's not actually a problem,\nthen we could apply the same principle here also -- don't pollute the\ncode with early-exit cases, just let it do its thing and sort it out\nlater. Do you know what the actual problems would be here if we didn't\nexit early in these cases?\n\n- Depending on the answers to the above points, one thing we could\nthink of doing is put an early exit case into ExecInitNode itself: if\n(unlikely(!ExecPlanStillValid(whatever)) return NULL. Maybe Andres or\nsomeone is going to argue that that checks too often and is thus too\nexpensive, but it would be a lot more maintainable than having similar\nchecks strewn throughout the ExecInit* functions. Perhaps it deserves\nsome thought/benchmarking. More generally, if there's anything we can\ndo to centralize these checks in fewer places, I think that would be\nworth considering. The patch isn't terribly large as it stands, so I\ndon't necessarily think that this is a critical issue, but I'm just\nwondering if we can do better. 
I'm not even sure that it would be too\nexpensive to just initialize the whole plan always, and then just do\none test at the end. That's not OK if the changed tuple descriptor (or\nsomething else) is going to crash or error out in a funny way or\nsomething before initialization is completed, but if it's just going\nto result in burning a few CPU cycles in a corner case, I don't know\nif we should really care.\n\n- The \"At this point\" comments don't give any rationale for why we\nshouldn't have received any such invalidation messages. That makes\nthem fairly useless; the Assert by itself clarifies that you think\nthat case shouldn't happen. The comment's job is to justify that\nclaim.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 6 Sep 2023 10:20:06 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Wed, Sep 6, 2023 at 11:20 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Sep 6, 2023 at 5:12 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Attached updated patches. Thanks for the review.\n>\n> I think 0001 looks ready to commit. I'm not sure that the commit\n> message needs to mention future patches here, since this code cleanup\n> seems like a good idea regardless, but if you feel otherwise, fair\n> enough.\n\nOK, I will remove the mention of future patches.\n\n> On 0002, some questions:\n>\n> - In ExecEndLockRows, is the call to EvalPlanQualEnd a concern? i.e.\n> Does that function need any adjustment?\n\nI think it does with the patch as it stands. 
It needs to have an\nearly exit at the top if parentestate is NULL, which it would be if\nEvalPlanQualInit() wasn't called from an ExecInit*() function.\n\nThough, as I answer below your question as to whether there is\nactually any need to interrupt all of the ExecInit*() routines,\nnothing needs to change in ExecEndLockRows().\n\n> - In ExecEndMemoize, should there be a null-test around\n> MemoryContextDelete(node->tableContext) as we have in\n> ExecEndRecursiveUnion, ExecEndSetOp, etc.?\n\nOops, you're right. Added.\n\n> I wonder how we feel about setting pointers to NULL after freeing the\n> associated data structures. The existing code isn't consistent about\n> doing that, and making it do so would be a fairly large change that\n> would bloat this patch quite a bit. On the other hand, I think it's a\n> good practice as a general matter, and we do do it in some ExecEnd\n> functions.\n\nI agree that it might be worthwhile to take the opportunity and make\nthe code more consistent in this regard. So, I've included those\nchanges too in 0002.\n\n> On 0003, I have some doubt about whether we really have all the right\n> design decisions in detail here:\n>\n> - Why have this weird rule where sometimes we return NULL and other\n> times the planstate? Is there any point to such a coding rule? Why not\n> just always return the planstate?\n>\n> - Is there any point to all of these early exit cases? For example, in\n> ExecInitBitmapAnd, why exit early if initialization fails? Why not\n> just plunge ahead and if initialization failed the caller will notice\n> that and when we ExecEndNode some of the child node pointers will be\n> NULL but who cares? The obvious disadvantage of this approach is that\n> we're doing a bunch of unnecessary initialization, but we're also\n> speeding up the common case where we don't need to abort by avoiding a\n> branch that will rarely be taken. 
I'm not quite sure what the right\n> thing to do is here.\n>\n> - The cases where we call ExecGetRangeTableRelation or\n> ExecOpenScanRelation are a bit subtler ... maybe initialization that\n> we're going to do later is going to barf if the tuple descriptor of\n> the relation isn't what we thought it was going to be. In that case it\n> becomes important to exit early. But if that's not actually a problem,\n> then we could apply the same principle here also -- don't pollute the\n> code with early-exit cases, just let it do its thing and sort it out\n> later. Do you know what the actual problems would be here if we didn't\n> exit early in these cases?\n>\n> - Depending on the answers to the above points, one thing we could\n> think of doing is put an early exit case into ExecInitNode itself: if\n> (unlikely(!ExecPlanStillValid(whatever)) return NULL. Maybe Andres or\n> someone is going to argue that that checks too often and is thus too\n> expensive, but it would be a lot more maintainable than having similar\n> checks strewn throughout the ExecInit* functions. Perhaps it deserves\n> some thought/benchmarking. More generally, if there's anything we can\n> do to centralize these checks in fewer places, I think that would be\n> worth considering. The patch isn't terribly large as it stands, so I\n> don't necessarily think that this is a critical issue, but I'm just\n> wondering if we can do better. I'm not even sure that it would be too\n> expensive to just initialize the whole plan always, and then just do\n> one test at the end. That's not OK if the changed tuple descriptor (or\n> something else) is going to crash or error out in a funny way or\n> something before initialization is completed, but if it's just going\n> to result in burning a few CPU cycles in a corner case, I don't know\n> if we should really care.\n\nI thought about this some and figured that adding the\nis-CachedPlan-still-valid tests in the following places should suffice\nafter all:\n\n1. 
In InitPlan() right after the top-level ExecInitNode() calls
2. In ExecInit*() functions of Scan nodes, right after
ExecOpenScanRelation() calls

CachedPlans can only become invalid because of concurrent changes to
the inheritance child tables referenced in the plan. Only the
following schema modifications of child tables can be performed
concurrently:

* Addition of a column (allowed only if traditional inheritance child)
* Addition of an index
* Addition of a non-index constraint
* Dropping of a child table (allowed only if traditional inheritance child)
* Dropping of an index referenced in the plan

The first 3 are not destructive enough to cause crashes or weird
errors during ExecInit*(), though the last two can be, hence the 2nd
set of tests after ExecOpenScanRelation() mentioned above.

> - The \"At this point\" comments don't give any rationale for why we
> shouldn't have received any such invalidation messages. That makes
> them fairly useless; the Assert by itself clarifies that you think
> that case shouldn't happen. The comment's job is to justify that
> claim.

I've rewritten the comments.

I'll post the updated set of patches shortly.

--
Thanks, Amit Langote
EDB: http://www.enterprisedb.com", "msg_date": "Mon, 25 Sep 2023 21:57:48 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Mon, Sep 25, 2023 at 9:57 PM Amit Langote <amitlangote09@gmail.com> wrote:
> On Wed, Sep 6, 2023 at 11:20 PM Robert Haas <robertmhaas@gmail.com> wrote:
> > - Is there any point to all of these early exit cases? For example, in
> > ExecInitBitmapAnd, why exit early if initialization fails? Why not
> > just plunge ahead and if initialization failed the caller will notice
> > that and when we ExecEndNode some of the child node pointers will be
> > NULL but who cares?
The obvious disadvantage of this approach is that\n> > we're doing a bunch of unnecessary initialization, but we're also\n> > speeding up the common case where we don't need to abort by avoiding a\n> > branch that will rarely be taken. I'm not quite sure what the right\n> > thing to do is here.\n> I thought about this some and figured that adding the\n> is-CachedPlan-still-valid tests in the following places should suffice\n> after all:\n>\n> 1. In InitPlan() right after the top-level ExecInitNode() calls\n> 2. In ExecInit*() functions of Scan nodes, right after\n> ExecOpenScanRelation() calls\n\nAfter sleeping on this, I think we do need the checks after all the\nExecInitNode() calls too, because we have many instances of the code\nlike the following one:\n\n outerPlanState(gatherstate) = ExecInitNode(outerNode, estate, eflags);\n tupDesc = ExecGetResultType(outerPlanState(gatherstate));\n <some code that dereferences outDesc>\n\nIf outerNode is a SeqScan and ExecInitSeqScan() returned early because\nExecOpenScanRelation() detected that plan was invalidated, then\ntupDesc would be NULL in this case, causing the code to crash.\n\nNow one might say that perhaps we should only add the\nis-CachedPlan-valid test in the instances where there is an actual\nrisk of such misbehavior, but that could lead to confusion, now or\nlater. It seems better to add them after every ExecInitNode() call\nwhile we're inventing the notion, because doing so relieves the\nauthors of future enhancements of the ExecInit*() routines from\nworrying about any of this.\n\nAttached 0003 should show how that turned out.\n\nUpdated 0002 as mentioned in the previous reply -- setting pointers to\nNULL after freeing them more consistently across various ExecEnd*()\nroutines and using the `if (pointer != NULL)` style over the `if\n(pointer)` more consistently.\n\nUpdated 0001's commit message to remove the mention of its relation to\nany future commits. 
I intend to push it tomorrow.\n\nPatches 0004 onwards contain changes too, mainly in terms of moving\nthe code around from one patch to another, but I'll omit the details\nof the specific change for now.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 26 Sep 2023 22:06:12 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Sep 26, 2023 at 10:06 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Mon, Sep 25, 2023 at 9:57 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Wed, Sep 6, 2023 at 11:20 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > - Is there any point to all of these early exit cases? For example, in\n> > > ExecInitBitmapAnd, why exit early if initialization fails? Why not\n> > > just plunge ahead and if initialization failed the caller will notice\n> > > that and when we ExecEndNode some of the child node pointers will be\n> > > NULL but who cares? The obvious disadvantage of this approach is that\n> > > we're doing a bunch of unnecessary initialization, but we're also\n> > > speeding up the common case where we don't need to abort by avoiding a\n> > > branch that will rarely be taken. I'm not quite sure what the right\n> > > thing to do is here.\n> > I thought about this some and figured that adding the\n> > is-CachedPlan-still-valid tests in the following places should suffice\n> > after all:\n> >\n> > 1. In InitPlan() right after the top-level ExecInitNode() calls\n> > 2. 
In ExecInit*() functions of Scan nodes, right after\n> > ExecOpenScanRelation() calls\n>\n> After sleeping on this, I think we do need the checks after all the\n> ExecInitNode() calls too, because we have many instances of the code\n> like the following one:\n>\n> outerPlanState(gatherstate) = ExecInitNode(outerNode, estate, eflags);\n> tupDesc = ExecGetResultType(outerPlanState(gatherstate));\n> <some code that dereferences outDesc>\n>\n> If outerNode is a SeqScan and ExecInitSeqScan() returned early because\n> ExecOpenScanRelation() detected that plan was invalidated, then\n> tupDesc would be NULL in this case, causing the code to crash.\n>\n> Now one might say that perhaps we should only add the\n> is-CachedPlan-valid test in the instances where there is an actual\n> risk of such misbehavior, but that could lead to confusion, now or\n> later. It seems better to add them after every ExecInitNode() call\n> while we're inventing the notion, because doing so relieves the\n> authors of future enhancements of the ExecInit*() routines from\n> worrying about any of this.\n>\n> Attached 0003 should show how that turned out.\n>\n> Updated 0002 as mentioned in the previous reply -- setting pointers to\n> NULL after freeing them more consistently across various ExecEnd*()\n> routines and using the `if (pointer != NULL)` style over the `if\n> (pointer)` more consistently.\n>\n> Updated 0001's commit message to remove the mention of its relation to\n> any future commits. I intend to push it tomorrow.\n\nPushed that one. Here are the rebased patches.\n\n0001 seems ready to me, but I'll wait a couple more days for others to\nweigh in. 
Just to highlight a kind of change that others may have\ndiffering opinions on, consider this hunk from the patch:\n\n- MemoryContextDelete(node->aggcontext);\n+ if (node->aggcontext != NULL)\n+ {\n+ MemoryContextDelete(node->aggcontext);\n+ node->aggcontext = NULL;\n+ }\n...\n+ ExecEndNode(outerPlanState(node));\n+ outerPlanState(node) = NULL;\n\nSo the patch wants to enhance the consistency of setting the pointer\nto NULL after freeing part. Robert mentioned his preference for doing\nit in the patch, which I agree with.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 28 Sep 2023 17:26:27 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Thu, Sep 28, 2023 at 5:26 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Sep 26, 2023 at 10:06 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > After sleeping on this, I think we do need the checks after all the\n> > ExecInitNode() calls too, because we have many instances of the code\n> > like the following one:\n> >\n> > outerPlanState(gatherstate) = ExecInitNode(outerNode, estate, eflags);\n> > tupDesc = ExecGetResultType(outerPlanState(gatherstate));\n> > <some code that dereferences outDesc>\n> >\n> > If outerNode is a SeqScan and ExecInitSeqScan() returned early because\n> > ExecOpenScanRelation() detected that plan was invalidated, then\n> > tupDesc would be NULL in this case, causing the code to crash.\n> >\n> > Now one might say that perhaps we should only add the\n> > is-CachedPlan-valid test in the instances where there is an actual\n> > risk of such misbehavior, but that could lead to confusion, now or\n> > later. 
It seems better to add them after every ExecInitNode() call\n> > while we're inventing the notion, because doing so relieves the\n> > authors of future enhancements of the ExecInit*() routines from\n> > worrying about any of this.\n> >\n> > Attached 0003 should show how that turned out.\n> >\n> > Updated 0002 as mentioned in the previous reply -- setting pointers to\n> > NULL after freeing them more consistently across various ExecEnd*()\n> > routines and using the `if (pointer != NULL)` style over the `if\n> > (pointer)` more consistently.\n> >\n> > Updated 0001's commit message to remove the mention of its relation to\n> > any future commits. I intend to push it tomorrow.\n>\n> Pushed that one. Here are the rebased patches.\n>\n> 0001 seems ready to me, but I'll wait a couple more days for others to\n> weigh in. Just to highlight a kind of change that others may have\n> differing opinions on, consider this hunk from the patch:\n>\n> - MemoryContextDelete(node->aggcontext);\n> + if (node->aggcontext != NULL)\n> + {\n> + MemoryContextDelete(node->aggcontext);\n> + node->aggcontext = NULL;\n> + }\n> ...\n> + ExecEndNode(outerPlanState(node));\n> + outerPlanState(node) = NULL;\n>\n> So the patch wants to enhance the consistency of setting the pointer\n> to NULL after freeing part. 
Robert mentioned his preference for doing\n> it in the patch, which I agree with.\n\nRebased.\n\nI haven't been able to reproduce and debug a crash reported by cfbot\nthat I see every now and then:\n\nhttps://cirrus-ci.com/task/5673432591892480?logs=cores#L0\n\n[22:46:12.328] Program terminated with signal SIGSEGV, Segmentation fault.\n[22:46:12.328] Address not mapped to object.\n[22:46:12.838] #0 afterTriggerInvokeEvents\n(events=events@entry=0x836db0460, firing_id=1,\nestate=estate@entry=0x842eec100, delete_ok=<optimized out>) at\n../src/backend/commands/trigger.c:4656\n[22:46:12.838] #1 0x00000000006c67a8 in AfterTriggerEndQuery\n(estate=estate@entry=0x842eec100) at\n../src/backend/commands/trigger.c:5085\n[22:46:12.838] #2 0x000000000065bfba in CopyFrom (cstate=0x836df9038)\nat ../src/backend/commands/copyfrom.c:1293\n...\n\nWhile a patch in this series does change\nsrc/backend/commands/trigger.c, I'm not yet sure about its relation\nwith the backtrace shown there.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 20 Nov 2023 13:29:53 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Reviewing 0001:\n\nPerhaps ExecEndCteScan needs an adjustment. What if node->leader was never set?\n\nOther than that, I think this is in good shape. Maybe there are other\nthings we'd want to adjust here, or maybe there aren't, but there\ndoesn't seem to be any good reason to bundle more changes into the\nsame patch.\n\nReviewing 0002 and beyond:\n\nI think it's good that you have tried to divide up a big change into\nlittle pieces, but I'm finding the result difficult to understand. It\ndoesn't really seem like each patch stands on its own. I keep flipping\nbetween patches to try to understand why other patches are doing\nthings, which kind of defeats the purpose of splitting stuff up. 
For
example, 0002 adds a NodeTag field to QueryDesc, but it doesn't even
seem to initialize that field, let alone use it for anything. It adds
a CachedPlan pointer to QueryDesc too, and adapts CreateQueryDesc to
allow one as an argument, but none of the callers actually pass
anything. I suspect that the first change (adding a NodeTag field)
is a bug, and that the second one is intentional, but it's hard
to tell without flipping through all of the other patches to see how
they build on what 0002 does. And even when something isn't a bug,
it's also hard to tell whether it's the right design, again because
you can't consider each patch in isolation. Ideally, splitting a patch
set should bring related changes together in a single patch and push
unrelated changes apart into different patches, but I don't really see
this particular split having that effect.

There is a chicken and egg problem here, to be fair. If we add code
that can make plan initialization fail without teaching the planner to
cope with failures, then we have broken the server, and if we do the
reverse, then we have a bunch of dead code that we can't test. Neither
is very satisfactory. But I still hope there's some better division
possible than what you have here currently. For instance, I wonder if
it would be possible to add all the stuff to cope with plan
initialization failing and then have a test patch that makes
initialization randomly fail with some probability (or maybe you can
even cause failures at specific points). Then you could test that
infrastructure by running the regression tests in a loop with various
values of the relevant setting.

Another overall comment that I have is that it doesn't feel like
there's enough high-level explanation of the design. I don't know how
much of that should go in comments vs. commit messages vs. a README
that accompanies the patch set vs.
whatever else, and I strongly\nsuspect that some of the stuff that seems confusing now is actually\nstuff that at one point I understood and have just forgotten about.\nBut rediscovering it shouldn't be quite so hard. For example, consider\nthe question \"why are we storing the CachedPlan in the QueryDesc?\" I\neventually figured out that it's so that ExecPlanStillValid can call\nCachedPlanStillValid which can then consult the cached plan's is_valid\nflag. But is that the only access to the CachedPlan that we ever\nexpect to occur via the QueryDesc? If not, what else is allowable? If\nso, why not just store a Boolean in the QueryDesc and arrange for the\nplancache to be able to flip it when invalidating? I'm not saying\nthat's a better design -- I'm saying that it looks hard to understand\nyour thought process from the patch set. And also, you know, assuming\nthe current design is correct, could there be some way of dividing up\nthe patch set so that this one change, where we add the CachedPlan to\nthe QueryDesc, isn't so spread out across the whole series?\n\nSome more detailed review comments below. This isn't really a full\nreview because I don't understand the patches well enough for that,\nbut it's some stuff I noticed.\n\nIn 0002:\n\n+ * result-rel info, etc. Also, we don't pass the parent't copy of the\n\nTypo.\n\n+ /*\n+ * All the necessary locks must already have been taken when\n+ * initializing the parent's copy of subplanstate, so the CachedPlan,\n+ * if any, should not have become invalid during ExecInitNode().\n+ */\n+ Assert(ExecPlanStillValid(rcestate));\n\nThis -- and the other similar instance -- feel very uncomfortable.\nThere's a lot of action at a distance here. If this assertion ever\nfailed, how would anyone ever figure out what went wrong? You wouldn't\nfor example know which object got invalidated, presumably\ncorresponding to a lock that you failed to take. 
Unless the problem
were easily reproducible in a test environment, trying to guess what
happened might be pretty awful; imagine seeing this assertion failure
in a customer log file and trying to back-track to find the
underlying bug. A further problem is that what would actually happen
is you *wouldn't* see this in the customer log file, because
assertions wouldn't be enabled, so you'd just see queries occasionally
returning wrong answers, I guess? Or crashing in some other random
part of the code? Which seems even worse. At a minimum I think this
should be upgraded to a test-and-elog, and maybe there's some value in
trying to think of what should get printed by that elog to facilitate
proper debugging, if it happens.

In 0003:

+ *
+ * OK to ignore the return value; plan can't become invalid,
+ * because there's no CachedPlan.
 */
- ExecutorStart(cstate->queryDesc, 0);
+ (void) ExecutorStart(cstate->queryDesc, 0);

This also feels awkward, for similar reasons. Sure, it shouldn't
return false, but also, if it did, you'd just blindly continue. Maybe
there should be test-and-elog here too. Or maybe this is an indication
that we need less action at a distance. Like, if ExecutorStart took
the CachedPlan as an argument instead of feeding it through the
QueryDesc, then you could document that ExecutorStart returns true if
that value is passed as NULL and true or false otherwise. Here,
whether ExecutorStart can return true or false depends on the contents
of the queryDesc ... which, granted, in this case is just built a line
or two before anyway, but if you just passed it to ExecutorStart then
you wouldn't need to feed it through the QueryDesc, it seems to me.
Even better, maybe there should be ExecutorStart() that continues
returning void and ExecutorStartExtended() that takes a cached plan as
an additional argument and returns a bool.

 /*
- * Check that ExecutorFinish was called, unless in EXPLAIN-only mode. This
- * Assert is needed because ExecutorFinish is new as of 9.1, and callers
- * might forget to call it.
+ * Check that ExecutorFinish was called, unless in EXPLAIN-only mode or if
+ * execution was canceled. This Assert is needed because ExecutorFinish is
+ * new as of 9.1, and callers might forget to call it.
  */

Maybe we could drop the second sentence at this point.

In 0005:

+ * XXX Maybe we should we skip calling ExecCheckPermissions from
+ * InitPlan in a parallel worker.

Why? If the thinking is to save overhead, then perhaps try to assess
the overhead. If the thinking is that we don't want it to fail
spuriously, then we have to weigh that against the (security) risk of
succeeding spuriously.

+ * Returns true if current transaction holds a lock on the given relation of
+ * mode 'lockmode'. If 'orstronger' is true, a stronger lockmode is also OK.
+ * (\"Stronger\" is defined as \"numerically higher\", which is a bit
+ * semantically dubious but is OK for the purposes we use this for.)

I don't particularly enjoy seeing this comment cut and pasted into
some new place. Especially the tongue-in-cheek parenthetical part.
Better to refer to the original comment or something instead of
cut-and-pasting. Also, why is it appropriate to pass orstronger = true
here? Don't we expect the *exact* lock mode that we have planned to be
held, and isn't it a sure sign of a bug if it isn't? Maybe orstronger
should just be ripped out here (and the comment could then go away
too).

In 0006:

+ /*
+ * RTIs of all partitioned tables whose children are scanned by
+ * appendplans. The list contains a bitmapset for every partition tree
+ * covered by this Append.
+ */

The first sentence of this comment makes this sound like a list of
integers, the RTIs of all partitioned tables that are scanned.
The
second sentence makes it sound like a list of bitmapsets, but what
does it mean to talk about each partition tree covered by this Append?

This is far from a complete review but I'm running out of steam for
today. I hope that it's at least somewhat useful.

...Robert


", "msg_date": "Wed, 6 Dec 2023 13:52:59 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Mon, 20 Nov 2023 at 10:00, Amit Langote <amitlangote09@gmail.com> wrote:
>
> On Thu, Sep 28, 2023 at 5:26 PM Amit Langote <amitlangote09@gmail.com> wrote:
> > On Tue, Sep 26, 2023 at 10:06 PM Amit Langote <amitlangote09@gmail.com> wrote:
> > > After sleeping on this, I think we do need the checks after all the
> > > ExecInitNode() calls too, because we have many instances of the code
> > > like the following one:
> > >
> > > outerPlanState(gatherstate) = ExecInitNode(outerNode, estate, eflags);
> > > tupDesc = ExecGetResultType(outerPlanState(gatherstate));
> > > <some code that dereferences outDesc>
> > >
> > > If outerNode is a SeqScan and ExecInitSeqScan() returned early because
> > > ExecOpenScanRelation() detected that plan was invalidated, then
> > > tupDesc would be NULL in this case, causing the code to crash.
> > >
> > > Now one might say that perhaps we should only add the
> > > is-CachedPlan-valid test in the instances where there is an actual
> > > risk of such misbehavior, but that could lead to confusion, now or
> > > later.
It seems better to add them after every ExecInitNode() call\n> > > while we're inventing the notion, because doing so relieves the\n> > > authors of future enhancements of the ExecInit*() routines from\n> > > worrying about any of this.\n> > >\n> > > Attached 0003 should show how that turned out.\n> > >\n> > > Updated 0002 as mentioned in the previous reply -- setting pointers to\n> > > NULL after freeing them more consistently across various ExecEnd*()\n> > > routines and using the `if (pointer != NULL)` style over the `if\n> > > (pointer)` more consistently.\n> > >\n> > > Updated 0001's commit message to remove the mention of its relation to\n> > > any future commits. I intend to push it tomorrow.\n> >\n> > Pushed that one. Here are the rebased patches.\n> >\n> > 0001 seems ready to me, but I'll wait a couple more days for others to\n> > weigh in. Just to highlight a kind of change that others may have\n> > differing opinions on, consider this hunk from the patch:\n> >\n> > - MemoryContextDelete(node->aggcontext);\n> > + if (node->aggcontext != NULL)\n> > + {\n> > + MemoryContextDelete(node->aggcontext);\n> > + node->aggcontext = NULL;\n> > + }\n> > ...\n> > + ExecEndNode(outerPlanState(node));\n> > + outerPlanState(node) = NULL;\n> >\n> > So the patch wants to enhance the consistency of setting the pointer\n> > to NULL after freeing part. 
Robert mentioned his preference for doing\n> > it in the patch, which I agree with.\n>\n> Rebased.\n\nThere is a leak reported at [1], details for the same is available at [2]:\ndiff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/select_views.out\n/tmp/cirrus-ci-build/build/testrun/regress-running/regress/results/select_views.out\n--- /tmp/cirrus-ci-build/src/test/regress/expected/select_views.out\n2023-12-19 23:00:04.677385000 +0000\n+++ /tmp/cirrus-ci-build/build/testrun/regress-running/regress/results/select_views.out\n2023-12-19 23:06:26.870259000 +0000\n@@ -1288,6 +1288,7 @@\n (102, '2011-10-12', 120),\n (102, '2011-10-28', 200),\n (103, '2011-10-15', 480);\n+WARNING: resource was not closed: relation \"customer_pkey\"\n CREATE VIEW my_property_normal AS\n SELECT * FROM customer WHERE name = current_user;\n CREATE VIEW my_property_secure WITH (security_barrier) A\n\n[1] - https://cirrus-ci.com/task/6494009196019712\n[2] - https://api.cirrus-ci.com/v1/artifact/task/6494009196019712/testrun/build/testrun/regress-running/regress/regression.diffs\n\nRegards,\nVingesh\n\n\n", "msg_date": "Fri, 5 Jan 2024 16:16:27 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "\n\n> On 6 Dec 2023, at 23:52, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> I hope that it's at least somewhat useful.\n> \n\n\n> On 5 Jan 2024, at 15:46, vignesh C <vignesh21@gmail.com> wrote:\n> \n> There is a leak reported \n\nHi Amit,\n\nthis is a kind reminder that some feedback on your patch[0] is waiting for your reply.\nThank you for your work!\n\nBest regards, Andrey Borodin.\n\n\n[0] https://commitfest.postgresql.org/47/3478/\n\n", "msg_date": "Sun, 31 Mar 2024 10:03:31 +0500", "msg_from": "\"Andrey M. 
Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Hi Andrey,\n\nOn Sun, Mar 31, 2024 at 2:03 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> > On 6 Dec 2023, at 23:52, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > I hope that it's at least somewhat useful.\n>\n> > On 5 Jan 2024, at 15:46, vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > There is a leak reported\n>\n> Hi Amit,\n>\n> this is a kind reminder that some feedback on your patch[0] is waiting for your reply.\n> Thank you for your work!\n\nThanks for moving this to the next CF.\n\nMy apologies (especially to Robert) for not replying on this thread\nfor a long time.\n\nI plan to start working on this soon.\n\n--\nThanks, Amit Langote\n\n\n", "msg_date": "Mon, 8 Apr 2024 17:39:02 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, 20 Jan 2023 at 08:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I spent some time re-reading this whole thread, and the more I read\n> the less happy I got. We are adding a lot of complexity and introducing\n> coding hazards that will surely bite somebody someday. And after awhile\n> I had what felt like an epiphany: the whole problem arises because the\n> system is wrongly factored. We should get rid of AcquireExecutorLocks\n> altogether, allowing the plancache to hand back a generic plan that\n> it's not certain of the validity of, and instead integrate the\n> responsibility for acquiring locks into executor startup. It'd have\n> to be optional there, since we don't need new locks in the case of\n> executing a just-planned plan; but we can easily add another eflags\n> bit (EXEC_FLAG_GET_LOCKS or so). 
Then there has to be a convention\n> whereby the ExecInitNode traversal can return an indicator that\n> \"we failed because the plan is stale, please make a new plan\".\n\nI also reread the entire thread up to this point yesterday. I've also\nbeen thinking about this recently as Amit has mentioned it to me a few\ntimes over the past few months.\n\nWith the caveat of not yet having looked at the latest patch, my\nthoughts are that having the executor startup responsible for taking\nlocks is a bad idea and I don't think we should go down this path. My\nreasons are:\n\n1. No ability to control the order that the locks are obtained. The\norder in which the locks are taken will be at the mercy of the plan\nthe planner chooses.\n2. It introduces lots of complexity regarding how to cleanly clean up\nafter a failed executor startup which is likely to make exec startup\nslower and the code more complex\n3. It puts us even further down the path of actually needing an\nexecutor startup phase.\n\nFor #1, the locks taken for SELECT queries are less likely to conflict\nwith other locks obtained by PostgreSQL, but at least at the moment if\nsomeone is getting deadlocks with a DDL type operation, they can\nchange their query or DDL script so that locks are taken in the same\norder. If we allowed executor startup to do this then if someone\ncomes complaining that PG18 deadlocks when PG17 didn't we'd just have\nto tell them to live with it. There's a comment at the bottom of\nfind_inheritance_children_extended() just above the qsort() which\nexplains about the deadlocking issue.\n\nI don't have much extra to say about #2. As mentioned, I've not\nlooked at the patch. On paper, it sounds possible, but it also sounds\nbug-prone and ugly.\n\nFor #3, I've been thinking about what improvements we can do to make\nthe executor more efficient. In [1], Andres talks about some very\ninteresting things. In particular, in his email items 3) and 5) are\nrelevant here. 
If we did move lots of executor startup code into the\nplanner, I think it would be possible to one day get rid of executor\nstartup and have the plan record how much memory is needed for the\nnon-readonly part of the executor state and tag each plan node with\nthe offset in bytes they should use for their portion of the executor\nworking state. This would be a single memory allocation for the entire\nplan. The exact details are not important here, but I feel like if we\nload up executor startup with more responsibilities, it'll just make\ndoing something like this harder. The init run-time pruning code that\nI worked on likely already has done that, but I don't think it's\nclosed the door on it as it might just mean allocating more executor\nstate memory than we need to. Providing the plan node records the\noffset into that memory, I think it could be made to work, just with\nthe inefficiency of having a (possibly) large unused hole in that\nstate memory.\n\nAs far as I understand it, your objection to the original proposal is\njust on the grounds of concerns about introducing hazards that could\nturn into bugs. I think we could come up with some way to make the\nprior method of doing pruning before executor startup work. 
I think\nwhat Amit had before your objection was starting to turn into\nsomething workable and we should switch back to working on that.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/20180525033538.6ypfwcqcxce6zkjj%40alap3.anarazel.de\n\n\n", "msg_date": "Sun, 19 May 2024 12:39:24 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> With the caveat of not yet having looked at the latest patch, my\n> thoughts are that having the executor startup responsible for taking\n> locks is a bad idea and I don't think we should go down this path.\n\nOK, it's certainly still up for argument, but ...\n\n> 1. No ability to control the order that the locks are obtained. The\n> order in which the locks are taken will be at the mercy of the plan\n> the planner chooses.\n\nI do not think I buy this argument, because plancache.c doesn't\nprovide any \"ability to control the order\" today, and never has.\nThe order in which AcquireExecutorLocks re-gets relation locks is only\nweakly related to the order in which the parser/planner got them\noriginally. The order in which AcquirePlannerLocks re-gets the locks\nis even less related to the original. This doesn't cause any big\nproblems that I'm aware of, because these locks are fairly weak.\n\nI think we do have a guarantee that for partitioned tables, parents\nwill be locked before children, and that's probably valuable.\nBut an executor-driven lock order could preserve that property too.\n\n> 2. It introduces lots of complexity regarding how to cleanly clean up\n> after a failed executor startup which is likely to make exec startup\n> slower and the code more complex\n\nPerhaps true, I'm not sure. But the patch we'd been discussing\nbefore this proposal was darn complex as well.\n\n> 3. 
It puts us even further down the path of actually needing an\n> executor startup phase.\n\nHuh? We have such a thing already.\n\n> For #1, the locks taken for SELECT queries are less likely to conflict\n> with other locks obtained by PostgreSQL, but at least at the moment if\n> someone is getting deadlocks with a DDL type operation, they can\n> change their query or DDL script so that locks are taken in the same\n> order. If we allowed executor startup to do this then if someone\n> comes complaining that PG18 deadlocks when PG17 didn't we'd just have\n> to tell them to live with it. There's a comment at the bottom of\n> find_inheritance_children_extended() just above the qsort() which\n> explains about the deadlocking issue.\n\nThe reason it's important there is that function is (sometimes)\nused for lock modes that *are* exclusive.\n\n> For #3, I've been thinking about what improvements we can do to make\n> the executor more efficient. In [1], Andres talks about some very\n> interesting things. In particular, in his email items 3) and 5) are\n> relevant here. If we did move lots of executor startup code into the\n> planner, I think it would be possible to one day get rid of executor\n> startup and have the plan record how much memory is needed for the\n> non-readonly part of the executor state and tag each plan node with\n> the offset in bytes they should use for their portion of the executor\n> working state.\n\nI'm fairly skeptical about that idea. The entire reason we have an\nissue here is that we want to do runtime partition pruning, which\nby definition can't be done at plan time. So I doubt it's going\nto play nice with what we are trying to accomplish in this thread.\n\nMoreover, while \"replace a bunch of small pallocs with one big one\"\nwould save some palloc effort, what are you going to do to ensure\nthat that memory has the right initial contents? 
I think this idea is\nlikely to make the executor a great deal more notationally complex\nwithout actually buying all that much. Maybe Andres can make it work,\nbut I don't want to contort other parts of the system design on the\npurely hypothetical basis that this might happen.\n\n> I think what Amit had before your objection was starting to turn into\n> something workable and we should switch back to working on that.\n\nThe reason I posted this idea was that I didn't think the previously\nexisting patch looked promising at all.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 May 2024 21:27:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Sun, 19 May 2024 at 13:27, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > 1. No ability to control the order that the locks are obtained. The\n> > order in which the locks are taken will be at the mercy of the plan\n> > the planner chooses.\n>\n> I do not think I buy this argument, because plancache.c doesn't\n> provide any \"ability to control the order\" today, and never has.\n> The order in which AcquireExecutorLocks re-gets relation locks is only\n> weakly related to the order in which the parser/planner got them\n> originally. The order in which AcquirePlannerLocks re-gets the locks\n> is even less related to the original. This doesn't cause any big\n> problems that I'm aware of, because these locks are fairly weak.\n\nIt may not bite many people, it's just that if it does, I don't see\nwhat we could do to help those people. At the moment we could tell\nthem to adjust their DDL script to obtain the locks in the same order\nas their query. 
With your idea that cannot be done as the order could\nchange when the planner switches the join order.\n\n> I think we do have a guarantee that for partitioned tables, parents\n> will be locked before children, and that's probably valuable.\n> But an executor-driven lock order could preserve that property too.\n\nI think you'd have to lock the parent before the child. That would\nremain true and consistent anyway when taking locks during a\nbreadth-first plan traversal.\n\n> > For #3, I've been thinking about what improvements we can do to make\n> > the executor more efficient. In [1], Andres talks about some very\n> > interesting things. In particular, in his email items 3) and 5) are\n> > relevant here. If we did move lots of executor startup code into the\n> > planner, I think it would be possible to one day get rid of executor\n> > startup and have the plan record how much memory is needed for the\n> > non-readonly part of the executor state and tag each plan node with\n> > the offset in bytes they should use for their portion of the executor\n> > working state.\n>\n> I'm fairly skeptical about that idea. The entire reason we have an\n> issue here is that we want to do runtime partition pruning, which\n> by definition can't be done at plan time. So I doubt it's going\n> to play nice with what we are trying to accomplish in this thread.\n\nI think we could have both, providing there was a way to still\ntraverse the executor state tree in EXPLAIN. We'd need a way to skip\nportions of the plan that are not relevant or could be invalid for the\ncurrent execution. e.g can't show Index Scan because index has been\ndropped.\n\n> > I think what Amit had before your objection was starting to turn into\n> > something workable and we should switch back to working on that.\n>\n> The reason I posted this idea was that I didn't think the previously\n> existing patch looked promising at all.\n\nOk. 
It would be good if you could expand on that so we could\ndetermine if there's some fundamental reason it can't work or if\nthat's because you were blinded by your epiphany and didn't give that\nany thought after thinking of the alternative idea.\n\nI've gone to effort to point out things that I think are concerning\nwith your idea. It would be good if you could do the same for the\nprevious patch other than \"it didn't look promising\". It's pretty hard\nfor me to argue with that level of detail.\n\nDavid\n\n\n", "msg_date": "Sun, 19 May 2024 13:51:51 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Sun, May 19, 2024 at 9:39 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> For #1, the locks taken for SELECT queries are less likely to conflict\n> with other locks obtained by PostgreSQL, but at least at the moment if\n> someone is getting deadlocks with a DDL type operation, they can\n> change their query or DDL script so that locks are taken in the same\n> order. If we allowed executor startup to do this then if someone\n> comes complaining that PG18 deadlocks when PG17 didn't we'd just have\n> to tell them to live with it. There's a comment at the bottom of\n> find_inheritance_children_extended() just above the qsort() which\n> explains about the deadlocking issue.\n\nThought to chime in on this.\n\nA deadlock may occur with the execution-time locking proposed in the\npatch if the DDL script makes assumptions about how a cached plan's\nexecution determines the locking order for children of multiple parent\nrelations. Specifically, the deadlock can happen if the script tries\nto lock the child relations directly, instead of locking them through\ntheir respective parent relations. 
The patch doesn't change the order\nof locking of relations mentioned in the query, because that's defined\nin AcquirePlannerLocks().\n\n--\nThanks, Amit Langote\n\n\n", "msg_date": "Mon, 20 May 2024 21:13:21 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "I had occasion to run the same benchmark you described in the initial\nemail in this thread. To do so I applied patch series v49 on top of\n07cb29737a4e, which is just one that happened to have the same date as\nv49.\n\nI then used a script like this (against a server having\nplan_cache_mode=force_generic_mode)\n\nfor numparts in 0 1 2 4 8 16 32 48 64 80 81 96 127 128 160 200 256 257 288 300 384 512 1024 1536 2048; do\n pgbench testdb -i --partitions=$numparts 2>/dev/null\n echo -ne \"$numparts\\t\"\n pgbench -n testdb -S -T30 -Mprepared | grep \"^tps\" | sed -e 's/^tps = \\([0-9.]*\\) .*/\\1/'\ndone\n\nand did the same with the commit mentioned above (that is, unpatched).\nI got this table as result\n\n partitions │ patched │ 07cb29737a \n────────────┼──────────────┼──────────────\n 0 │ 65632.090431 │ 68967.712741\n 1 │ 68096.641831 │ 65356.587223\n 2 │ 59456.507575 │ 60884.679464\n 4 │ 62097.426 │ 59698.747104\n 8 │ 58044.311175 │ 57817.104562\n 16 │ 59741.926563 │ 52549.916262\n 32 │ 59261.693449 │ 44815.317215\n 48 │ 59047.125629 │ 38362.123652\n 64 │ 59748.738797 │ 34051.158525\n 80 │ 59276.839183 │ 32026.135076\n 81 │ 62318.572932 │ 30418.122933\n 96 │ 59678.857163 │ 28478.113651\n 127 │ 58761.960028 │ 24272.303742\n 128 │ 59934.268306 │ 24275.214593\n 160 │ 56688.790899 │ 21119.043564\n 200 │ 56323.188599 │ 18111.212849\n 256 │ 55915.22466 │ 14753.953709\n 257 │ 57810.530461 │ 15093.497575\n 288 │ 56874.780092 │ 13873.332162\n 300 │ 57222.056549 │ 13463.768946\n 384 │ 54073.77295 │ 11183.558339\n 512 │ 37503.766847 │ 8114.32532\n 1024 │ 42746.866448 │ 4468.41359\n 1536 │ 39500.58411 │ 
3049.984599\n 2048 │ 36988.519486 │ 2269.362006\n\nwhere already at 16 partitions we can see that things are going downhill\nwith the unpatched code. (However, what happens when the table is not\npartitioned looks a bit funny.)\n\nI hope we can get this new executor code in 18.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"The first law of live demos is: do not try to use the system.\nWrite a script that touches nothing, so as not to cause any damage.\" (Jakob Nielsen)\n\n\n", "msg_date": "Wed, 19 Jun 2024 19:09:26 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Thu, Jun 20, 2024 at 2:09 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> I hope we can get this new executor code in 18.\n\nThanks for doing the benchmark, Alvaro, and sorry for the late reply.\n\nYes, I'm hoping to get *some* version of this into v18. I've been\nthinking about how to move this forward and I'm starting to think that we\nshould go back to or at least consider as an option the old approach\nof changing the plancache to do the initial runtime pruning instead of\nchanging the executor to take locks, which is the design that the\nlatest patch set tries to implement.\n\nHere are the challenges facing the implementation of the current design:\n\n1. I went through many iterations of the changes to ExecInitNode() to\nreturn a partially initialized PlanState tree when it detects that the\nCachedPlan was invalidated after locking a child table and to\nExecEndNode() to account for the PlanState tree sometimes being\npartially initialized, but it still seems fragile and bug-prone to me.\nIt might be because this approach is fundamentally hard to get right\nor I haven't invested enough effort in becoming more confident in its\nrobustness.\n\n2. 
Refactoring needed due to the ExecutorStart() API change especially\nthat pertaining to portals does not seem airtight. I'm especially\nworried about moving the ExecutorStart() call for the\nPORTAL_MULTI_QUERY case from where it is currently to PortalStart().\nThat requires additional bookkeeping in PortalData and I am not\ntotally sure that the snapshot handling changes after that move are\nentirely correct.\n\n3. The need to add *back* the fields to store the RT indexes of\nrelations that are not looked at by ExecInitNode() traversal such as\nroot partitioned tables and non-leaf partitions.\n\nI'm worried about #2 the most. One complaint about the previous\ndesign was that the interface changes to capture and pass the result\nof doing initial pruning in plancache.c to the executor did not look\ngreat. However, after having tried doing #2, the changes to pass the\npruning result into the executor and changes to reuse it in\nExecInit[Merge]Append() seem a tad bit simpler than the refactoring\nand adjustments needed to handle failed ExecutorStart() calls, at\nmultiple code sites.\n\nAbout #1, I tend to agree with David that adding complexity around\nPlanState tree construction may not be a good idea, because we might\nwant to rethink Plan initialization code and data structures in the\nnot too distant future. One idea I thought of is to take the\nremaining locks (to wit, those on inheritance children if running a\ncached plan) at the beginning of InitPlan(), that is before\nExecInitNode(), like we handle the permission checking, so that we\ndon't need to worry about ever returning a partially initialized\nPlanState tree. However, we're still left with the tall task to\nimplement #2 such that it doesn't break anything.\n\nAnother concern about the old design was the unnecessary overhead of\ninitializing bitmapset fields in PlannedStmt that are meant for the\nlocking algorithm in AcquireExecutorLocks(). 
Andres suggested an idea\nofflist to either piggyback on cursorOptions argument of\npg_plan_queries() or adding a new boolean parameter to let the planner\nknow if the plan is one that might get cached and thus have\nAcquireExecutorLocks() called on it. Another idea David and I\ndiscussed offlist is inventing a RTELockInfo (cf RTEPermissionInfo)\nand only creating one for each RT entry that is un-prunable and do\naway with PlannedStmt.rtable. For partitioned tables, that entry will\npoint to the PartitionPruneInfo that will contain the RT indexes of\npartitions (or maybe just OIDs) mapped from their subplan indexes that\nare returned by the pruning code. So AcquireExecutorLocks() will lock\nall un-prunable relations by referring to their RTELockInfo entries\nand for each entry that points to a PartitionPruneInfo with initial\npruning steps, will only lock the partitions that survive the pruning.\n\nI am planning to polish that old patch set and post after playing with\nthose new ideas.\n\n--\nThanks, Amit Langote\n\n\n", "msg_date": "Mon, 12 Aug 2024 21:54:16 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Mon, Aug 12, 2024 at 8:54 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> 1. 
I went through many iterations of the changes to ExecInitNode() to\n> return a partially initialized PlanState tree when it detects that the\n> CachedPlan was invalidated after locking a child table and to\n> ExecEndNode() to account for the PlanState tree sometimes being\n> partially initialized, but it still seems fragile and bug-prone to me.\n> It might be because this approach is fundamentally hard to get right\n> or I haven't invested enough effort in becoming more confident in its\n> robustness.\n\nCan you give some examples of what's going wrong, or what you think\nmight go wrong?\n\nI didn't think there was a huge problem here based on previous\ndiscussion, but I could very well be missing some important challenge.\n\n> 2. Refactoring needed due to the ExecutorStart() API change especially\n> that pertaining to portals does not seem airtight. I'm especially\n> worried about moving the ExecutorStart() call for the\n> PORTAL_MULTI_QUERY case from where it is currently to PortalStart().\n> That requires additional bookkeeping in PortalData and I am not\n> totally sure that the snapshot handling changes after that move are\n> entirely correct.\n\nHere again, it would help to see exactly what you had to do and what\nconsequences you think it might have. But it sounds like you're\ntalking about moving ExecutorStart() from PortalStart() to PortalRun()\nand I agree that sounds like it might have user-visible behavioral\nconsequences that we don't want.\n\n> 3. 
The need to add *back* the fields to store the RT indexes of\n> relations that are not looked at by ExecInitNode() traversal such as\n> root partitioned tables and non-leaf partitions.\n\nI don't remember exactly why we removed those or what the benefit was,\nso I'm not sure how big of a problem it is if we have to put them\nback.\n\n> About #1, I tend to agree with David that adding complexity around\n> PlanState tree construction may not be a good idea, because we might\n> want to rethink Plan initialization code and data structures in the\n> not too distant future.\n\nLike Tom, I don't really buy this. There might be a good reason not to\ndo this in ExecutorStart(), but the hypothetical possibility that we\nmight want to change something and that this patch might make it\nharder is not it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 14 Aug 2024 15:23:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Thu, Aug 15, 2024 at 4:23 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Aug 12, 2024 at 8:54 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > 1. 
I went through many iterations of the changes to ExecInitNode() to\n> > return a partially initialized PlanState tree when it detects that the\n> > CachedPlan was invalidated after locking a child table and to\n> > ExecEndNode() to account for the PlanState tree sometimes being\n> > partially initialized, but it still seems fragile and bug-prone to me.\n> > It might be because this approach is fundamentally hard to get right\n> > or I haven't invested enough effort in becoming more confident in its\n> > robustness.\n>\n> Can you give some examples of what's going wrong, or what you think\n> might go wrong?\n>\n> I didn't think there was a huge problem here based on previous\n> discussion, but I could very well be missing some important challenge.\n\nTBH, it's more of a hunch that people who are not involved in this\ndevelopment might find the new reality, whereby the execution is not\nracefree until ExecutorRun(), hard to reason about.\n\nThat's perhaps true with the other approach too whereby one would need\nto consult a separate data structure that records the result of\npruning done in plancache.c to be sure if a given node of the plan\ntree coming from a CachedPlan is safe to execute or do something with.\n\n> > 2. Refactoring needed due to the ExecutorStart() API change especially\n> > that pertaining to portals does not seem airtight. I'm especially\n> > worried about moving the ExecutorStart() call for the\n> > PORTAL_MULTI_QUERY case from where it is currently to PortalStart().\n> > That requires additional bookkeeping in PortalData and I am not\n> > totally sure that the snapshot handling changes after that move are\n> > entirely correct.\n>\n> Here again, it would help to see exactly what you had to do and what\n> consequences you think it might have. 
But it sounds like you're\n> talking about moving ExecutorStart() from PortalStart() to PortalRun()\n> and I agree that sounds like it might have user-visible behavioral\n> consequences that we don't want.\n\nLet's specifically looks at this block of code in PortalRunMulti():\n\n /*\n * Must always have a snapshot for plannable queries. First time\n * through, take a new snapshot; for subsequent queries in the\n * same portal, just update the snapshot's copy of the command\n * counter.\n */\n if (!active_snapshot_set)\n {\n Snapshot snapshot = GetTransactionSnapshot();\n\n /* If told to, register the snapshot and save in portal */\n if (setHoldSnapshot)\n {\n snapshot = RegisterSnapshot(snapshot);\n portal->holdSnapshot = snapshot;\n }\n\n /*\n * We can't have the holdSnapshot also be the active one,\n * because UpdateActiveSnapshotCommandId would complain. So\n * force an extra snapshot copy. Plain PushActiveSnapshot\n * would have copied the transaction snapshot anyway, so this\n * only adds a copy step when setHoldSnapshot is true. (It's\n * okay for the command ID of the active snapshot to diverge\n * from what holdSnapshot has.)\n */\n PushCopiedSnapshot(snapshot);\n\n /*\n * As for PORTAL_ONE_SELECT portals, it does not seem\n * necessary to maintain portal->portalSnapshot here.\n */\n\n active_snapshot_set = true;\n }\n else\n UpdateActiveSnapshotCommandId();\n\nWithout the patch, the code immediately following this does a\nCreateQueryDesc(), which \"registers\" the above copied snapshot,\nfollowed by ExecutorStart() immediately followed by ExecutorRun(), for\neach query in the list for the PORTAL_RUN_MULTI case.\n\nWith the patch, CreateQueryDesc() and ExecutorStart() are moved to\nPortalStart() so that QueryDescs including the PlanState trees for all\nqueries are built before any is run. Why? 
So that if ExecutorStart()\nfails for any query in the list, we can simply throw out the QueryDesc\nand the PlanState trees of the previous queries (NOT run them) and ask\nplancache for a new CachedPlan for the list of queries. We don't have\na way to ask plancache.c to replan only a given query in the list.\n\nBecause of that reshuffling, the above block also needed to be moved\nto PortalStart() along with the CommandCounterIncrement() between\nqueries. That requires the following non-trivial changes:\n\n* A copy of the snapshot needs to be created for each statement after\nthe 1st one to be able to perform UpdateActiveSnapshotCommandId() on\nit.\n\n* In PortalRunMulti(), PushActiveSnapshot() must now be done for every\nquery because the executor expects the copy in the given query's\nQueryDesc to match the ActiveSnapshot.\n\n* There's no longer CCI() between queries in PortalRunMulti() because\nthe snapshots in each query's QueryDesc must have been adjusted to\nreflect the correct command counter. I've checked but can't really be\nsure if the value in the snapshot is all anyone ever uses if they want\nto know the current value of the command counter.\n\nThere is likely to be performance regression for the multi-query cases\ndue to this handling of snapshots and maybe even correctness issues.\n\n> > 3. The need to add *back* the fields to store the RT indexes of\n> > relations that are not looked at by ExecInitNode() traversal such as\n> > root partitioned tables and non-leaf partitions.\n>\n> I don't remember exactly why we removed those or what the benefit was,\n> so I'm not sure how big of a problem it is if we have to put them\n> back.\n\nWe removed those in commit 52ed730d511b after commit f2343653f5b2\nremoved redundant execution-time locking of non-leaf relations. 
So we\nremoved them because we realized that execution time locking is\nunnecessary given that AcquireExecutorLocks() exists and now we want\nto add them back because we'd like to get rid of\nAcquireExecutorLocks(). :-)\n\nI'm attaching a rebased version of the patch that implements the\ncurrent design because the cfbot has been broken for a while and also\nin case you or anyone else wants to take another look. I've combined\n2 patches into one -- one that dealt with the executor side changes to\naccount for locking and another that dealt with caller side changes to\nhandle executor returning when the CachedPlan becomes invalid.\n\n--\nThanks, Amit Langote", "msg_date": "Thu, 15 Aug 2024 21:57:40 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Thu, Aug 15, 2024 at 8:57 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> TBH, it's more of a hunch that people who are not involved in this\n> development might find the new reality, whereby the execution is not\n> racefree until ExecutorRun(), hard to reason about.\n\nI'm confused by what you mean here by \"racefree\". A race means\nmultiple sessions are doing stuff at the same time and the result\ndepends on who does what first, but the executor stuff is all\nbackend-private. Heavyweight locks are not backend-private, but those\nwould be taken in ExecutorStart(), not ExecutorRun(), IIUC.\n\n> With the patch, CreateQueryDesc() and ExecutorStart() are moved to\n> PortalStart() so that QueryDescs including the PlanState trees for all\n> queries are built before any is run. Why? So that if ExecutorStart()\n> fails for any query in the list, we can simply throw out the QueryDesc\n> and the PlanState trees of the previous queries (NOT run them) and ask\n> plancache for a new CachedPlan for the list of queries. 
We don't have\n> a way to ask plancache.c to replan only a given query in the list.\n\nI agree that moving this from PortalRun() to PortalStart() seems like\na bad idea, especially in view of what you write below.\n\n> * There's no longer CCI() between queries in PortalRunMulti() because\n> the snapshots in each query's QueryDesc must have been adjusted to\n> reflect the correct command counter. I've checked but can't really be\n> sure if the value in the snapshot is all anyone ever uses if they want\n> to know the current value of the command counter.\n\nI don't think anything stops somebody wanting to look at the current\nvalue of the command counter. I also don't think you can remove the\nCommandCounterIncrement() calls between successive queries, because\nthen they won't see the effects of earlier calls. So this sounds\nbroken to me.\n\nAlso keep in mind that one of the queries could call a function which\ndoes something that bumps the command counter again. I'm not sure if\nthat creates its own hazzard separate from the lack of CCIs, or\nwhether it's just another part of that same issue. But you can't\nassume that each query's snapshot should have a command counter value\none more than the previous query.\n\nWhile this all seems bad for the partially-initialized-execution-tree\napproach, I wonder if you don't have problems here with the other\ndesign, too. Let's say you've the multi-query case and there are 2\nqueries. The first one (Q1) is SELECT mysterious_function() and the\nsecond one (Q2) is SELECT * FROM range_partitioned_table WHERE\nkey_column = 42. What if mysterious_function() performs DDL on\nrange_partitioned_table? I haven't tested this so maybe there are\nthings going on here that prevent trouble, but it seems like executing\nQ1 can easily invalidate the plan for Q2. And then it seems like\nyou're basically back to the same problem.\n\n> > > 3. 
The need to add *back* the fields to store the RT indexes of\n> > > relations that are not looked at by ExecInitNode() traversal such as\n> > > root partitioned tables and non-leaf partitions.\n> >\n> > I don't remember exactly why we removed those or what the benefit was,\n> > so I'm not sure how big of a problem it is if we have to put them\n> > back.\n>\n> We removed those in commit 52ed730d511b after commit f2343653f5b2\n> removed redundant execution-time locking of non-leaf relations. So we\n> removed them because we realized that execution time locking is\n> unnecessary given that AcquireExecutorLocks() exists and now we want\n> to add them back because we'd like to get rid of\n> AcquireExecutorLocks(). :-)\n\nMy bias is to believe that getting rid of AcquireExecutorLocks() is\nprobably the right thing to do, but that's not a strongly-held\nposition and I could be totally wrong about it. The thing is, though,\nthat AcquireExecutorLocks() is fundamentally stupid, and it's hard to\nsee how it can ever be any smarter. If we want to make smarter\ndecisions about what to lock, it seems reasonable to me to think that\nthe locking code needs to be closer to code that can evaluate\nexpressions and prune partitions and stuff like that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 15 Aug 2024 11:34:51 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Aug 16, 2024 at 12:35 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Aug 15, 2024 at 8:57 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > TBH, it's more of a hunch that people who are not involved in this\n> > development might find the new reality, whereby the execution is not\n> > racefree until ExecutorRun(), hard to reason about.\n>\n> I'm confused by what you mean here by \"racefree\". 
A race means\n> multiple sessions are doing stuff at the same time and the result\n> depends on who does what first, but the executor stuff is all\n> backend-private. Heavyweight locks are not backend-private, but those\n> would be taken in ExecutorStart(), not ExecutorRun(), IIUC.\n\nSorry, yes, I meant ExecutorStart(). A backend that wants to execute\na plan tree from a CachedPlan is in a race with other backends that\nmight modify tables before ExecutorStart() takes the remaining locks.\nThat race window is bigger when it is ExecutorStart() that will take\nthe locks, and I don't mean in terms of timing, but in terms of the\nother code that can run in between GetCachedPlan() returning a\npartially valid plan and ExecutorStart() taking the remaining locks,\ndepending on the calling module.\n\n> > With the patch, CreateQueryDesc() and ExecutorStart() are moved to\n> > PortalStart() so that QueryDescs including the PlanState trees for all\n> > queries are built before any is run. Why? So that if ExecutorStart()\n> > fails for any query in the list, we can simply throw out the QueryDesc\n> > and the PlanState trees of the previous queries (NOT run them) and ask\n> > plancache for a new CachedPlan for the list of queries. We don't have\n> > a way to ask plancache.c to replan only a given query in the list.\n>\n> I agree that moving this from PortalRun() to PortalStart() seems like\n> a bad idea, especially in view of what you write below.\n>\n> > * There's no longer CCI() between queries in PortalRunMulti() because\n> > the snapshots in each query's QueryDesc must have been adjusted to\n> > reflect the correct command counter. I've checked but can't really be\n> > sure if the value in the snapshot is all anyone ever uses if they want\n> > to know the current value of the command counter.\n>\n> I don't think anything stops somebody wanting to look at the current\n> value of the command counter. 
> I also don't think you can remove the\n> CommandCounterIncrement() calls between successive queries, because\n> then they won't see the effects of earlier calls. So this sounds\n> broken to me.\n\nI suppose you mean CCI between \"running\" (calling ExecutorRun on)\nsuccessive queries. Then the patch is indeed broken. If we're to\nmake that right, the number of CCIs for the multi-query portals will\nhave to double given the separation of ExecutorStart() and\nExecutorRun() phases.\n\n> Also keep in mind that one of the queries could call a function which\n> does something that bumps the command counter again. I'm not sure if\n> that creates its own hazard separate from the lack of CCIs, or\n> whether it's just another part of that same issue. But you can't\n> assume that each query's snapshot should have a command counter value\n> one more than the previous query.\n>\n> While this all seems bad for the partially-initialized-execution-tree\n> approach, I wonder if you don't have problems here with the other\n> design, too. Let's say you have the multi-query case and there are 2\n> queries. The first one (Q1) is SELECT mysterious_function() and the\n> second one (Q2) is SELECT * FROM range_partitioned_table WHERE\n> key_column = 42. What if mysterious_function() performs DDL on\n> range_partitioned_table? I haven't tested this so maybe there are\n> things going on here that prevent trouble, but it seems like executing\n> Q1 can easily invalidate the plan for Q2. And then it seems like\n> you're basically back to the same problem.\n\nA rule (but not views AFAICS) can lead to the multi-query case (there\nmight be other ways).
I tried the following, and, yes, the plan for\nthe query queued by the rule is broken by the execution of that for\nthe 1st query:\n\ncreate table foo (a int);\ncreate table bar (a int);\ncreate or replace function foo_trig_func () returns trigger as $$\nbegin drop table bar cascade; return new.*; end; $$ language plpgsql;\ncreate trigger foo_trig before insert on foo execute function foo_trig_func();\ncreate rule insert_foo AS ON insert TO foo do also insert into bar\nvalues (new.*);\nset plan_cache_mode to force_generic_plan ;\nprepare q as insert into foo values (1);\nexecute q;\nNOTICE: drop cascades to rule insert_foo on table foo\nERROR: relation with OID 16418 does not exist\n\nThe ERROR comes from trying to run (actually \"initialize\") the cached\nplan for `insert into bar values (new.*);` which is due to the rule.\n\nThough, it doesn't have to be a cached plan for the breakage to\nhappen. You can see the same error without the prepared statement:\n\ninsert into foo values (1);\nNOTICE: drop cascades to rule insert_foo on table foo\nERROR: relation with OID 16418 does not exist\n\nAnother example:\n\ncreate or replace function foo_trig_func () returns trigger as $$\nbegin alter table bar add b int; return new.*; end; $$ language\nplpgsql;\nexecute q;\nERROR: table row type and query-specified row type do not match\nDETAIL: Query has too few columns.\n\ninsert into foo values (1);\nERROR: table row type and query-specified row type do not match\nDETAIL: Query has too few columns.\n\nThis time the error occurs in ExecModifyTable(), so when \"running\" the\nplan, but again the code that's throwing the error is just \"lazy\"\ninitialization of the ProjectionInfo when inserting into bar.\n\nSo it is possible for the executor to try to run a plan that has\nbecome invalid since it was created, so...\n\n> > > > 3. 
> > > > The need to add *back* the fields to store the RT indexes of\n> > > > relations that are not looked at by ExecInitNode() traversal such as\n> > > > root partitioned tables and non-leaf partitions.\n> > >\n> > > I don't remember exactly why we removed those or what the benefit was,\n> > > so I'm not sure how big of a problem it is if we have to put them\n> > > back.\n> >\n> > We removed those in commit 52ed730d511b after commit f2343653f5b2\n> > removed redundant execution-time locking of non-leaf relations. So we\n> > removed them because we realized that execution time locking is\n> > unnecessary given that AcquireExecutorLocks() exists and now we want\n> > to add them back because we'd like to get rid of\n> > AcquireExecutorLocks(). :-)\n>\n> My bias is to believe that getting rid of AcquireExecutorLocks() is\n> probably the right thing to do, but that's not a strongly-held\n> position and I could be totally wrong about it. The thing is, though,\n> that AcquireExecutorLocks() is fundamentally stupid, and it's hard to\n> see how it can ever be any smarter. If we want to make smarter\n> decisions about what to lock, it seems reasonable to me to think that\n> the locking code needs to be closer to code that can evaluate\n> expressions and prune partitions and stuff like that.\n\nOne perhaps crazy idea [1]:\n\nWhat if we remove AcquireExecutorLocks() and move the responsibility\nof taking the remaining necessary locks into the executor (those on\nany inheritance children that are added during planning and thus not\naccounted for by AcquirePlannerLocks()), like the patch already does,\nbut don't make it also check if the plan has become invalid, which it\ncan't do anyway unless it's from a CachedPlan. That means we instead\nlet the executor throw any errors that occur when trying to either\ninitialize the plan because of the changes that have occurred to the\nobjects referenced in the plan, like what is happening in the above\nexample.
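To make the tradeoff concrete, here is a toy Python model (all names invented; this is not PostgreSQL code) of the behavior being proposed: locking and validity checking are deferred to executor startup, and initialization simply fails if a referenced object has vanished since planning, mirroring the "relation with OID ... does not exist" error in the repro above.

```python
class Catalog:
    """Stand-in for the system catalogs: OID -> relation name."""
    def __init__(self):
        self.rels = {}
        self.next_oid = 16384  # first user OID in PostgreSQL
    def create(self, name):
        oid = self.next_oid
        self.next_oid += 1
        self.rels[oid] = name
        return oid
    def drop(self, oid):
        del self.rels[oid]

def exec_init_plan(catalog, plan_oids):
    """Open (and notionally lock) every relation a stale cached plan
    references.  No replan: if an object is gone, just raise, the way
    the executor surfaced the error in the rule/trigger example."""
    for oid in plan_oids:
        if oid not in catalog.rels:
            raise RuntimeError(f"relation with OID {oid} does not exist")
    return [catalog.rels[oid] for oid in plan_oids]
```

In this model the error surfaces at executor startup, which is exactly the tradeoff under discussion: no replanning machinery, just a hard error in the (rare) concurrent-DDL case.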
If that case is going to be rare anyway, why spend energy on\nchecking the validity and replan, especially if that's not an easy\nthing to do as we're finding out. In the above example, we could say\nthat it's a user error to create a rule like that, so it should not\nhappen in practice, but when it does, the executor seems to deal with\nit correctly by refusing to execute a broken plan. Perhaps it's more\nworthwhile to make the executor behave correctly in the face of plan\ninvalidation than teach the rest of the system to deal with the\nexecutor throwing its hands up when it runs into an invalid plan?\nAgain, I think this may be a crazy line of thinking but just wanted to\nget it out there.\n\n--\nThanks, Amit Langote\n\n[1] I recall Michael Paquier mentioning something like this to me once\nwhen I was describing this patch and thread to him.\n\n\n", "msg_date": "Fri, 16 Aug 2024 21:35:56 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Aug 16, 2024 at 8:36 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> So it is possible for the executor to try to run a plan that has\n> become invalid since it was created, so...\n\nI'm not sure what the \"so what\" here is.\n\n> One perhaps crazy idea [1]:\n>\n> What if we remove AcquireExecutorLocks() and move the responsibility\n> of taking the remaining necessary locks into the executor (those on\n> any inheritance children that are added during planning and thus not\n> accounted for by AcquirePlannerLocks()), like the patch already does,\n> but don't make it also check if the plan has become invalid, which it\n> can't do anyway unless it's from a CachedPlan. That means we instead\n> let the executor throw any errors that occur when trying to either\n> initialize the plan because of the changes that have occurred to the\n> objects referenced in the plan, like what is happening in the above\n> example.
If that case is going to be rare anway, why spend energy on\n> checking the validity and replan, especially if that's not an easy\n> thing to do as we're finding out. In the above example, we could say\n> that it's a user error to create a rule like that, so it should not\n> happen in practice, but when it does, the executor seems to deal with\n> it correctly by refusing to execute a broken plan . Perhaps it's more\n> worthwhile to make the executor behave correctly in face of plan\n> invalidation than teach the rest of the system to deal with the\n> executor throwing its hands up when it runs into an invalid plan?\n> Again, I think this may be a crazy line of thinking but just wanted to\n> get it out there.\n\nI don't know whether this is crazy or not. I think there are two\nissues. One, the set of checks that we have right now might not be\ncomplete, and we might just not have realized that because it happens\ninfrequently enough that we haven't found all the bugs. If that's so,\nthen a change like this could be a good thing, because it might force\nus to fix stuff we should be fixing anyway. I have a feeling that some\nof the checks you hit there were added as bug fixes long after the\ncode was written originally, so my confidence that we don't have more\nbugs isn't especially high.\n\nAnd two, it matters a lot how frequent the errors will be in practice.\nI think we normally try to replan rather than let a stale plan be used\nbecause we want to not fail, because users don't like failure. 
If the\ndesign you propose here would make failures more (or less) frequent,\nthen that's a problem (or awesome).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Aug 2024 12:39:02 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Aug 16, 2024 at 8:36 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>> So it is possible for the executor to try to run a plan that has\n>> become invalid since it was created, so...\n\n> I'm not sure what the \"so what\" here is.\n\nThe fact that there are holes in our protections against that doesn't\nmake it a good idea to walk away from the protections. That path\nleads to crashes and data corruption and unhappy users.\n\nWhat the examples here are showing is that AcquireExecutorLocks\nis incomplete because it only provides defenses against DDL\ninitiated by other sessions, not by our own session. We have\nCheckTableNotInUse but I'm not sure if it could be applied here.\nWe certainly aren't calling that in anywhere near as systematic\na way as we have for acquiring locks.\n\nMaybe we should rethink the principle that a session's locks\nnever conflict against itself, although I fear that might be\na nasty can of worms.\n\nCould it work to do CheckTableNotInUse when acquiring an\nexclusive table lock? 
I don't doubt that we'd have to fix some\ncode paths, but if the damage isn't extensive then that\nmight offer a more nearly bulletproof approach.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Aug 2024 12:54:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Mon, Aug 19, 2024 at 12:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> What the examples here are showing is that AcquireExecutorLocks\n> is incomplete because it only provides defenses against DDL\n> initiated by other sessions, not by our own session. We have\n> CheckTableNotInUse but I'm not sure if it could be applied here.\n> We certainly aren't calling that in anywhere near as systematic\n> a way as we have for acquiring locks.\n>\n> Maybe we should rethink the principle that a session's locks\n> never conflict against itself, although I fear that might be\n> a nasty can of worms.\n\nIt might not be that bad. It could replace the CheckTableNotInUse()\nprotections that we have today but maybe cover more cases, and it\ncould do so without needing any changes to the shared lock manager.\nSay every time you start a query you give that query an ID number, and\nall locks taken by that query are tagged with that ID number in the\nlocal lock table, and maybe some flags indicating why the lock was\ntaken. When a new lock acquisition comes along you can say \"oh, this\nlock was previously taken so that we could do thus-and-so\" and then\nuse that to fail with the appropriate error message. That seems like\nit might be more powerful than the refcnt check within\nCheckTableNotInUse().\n\nBut that seems somewhat incidental to what this thread is about. 
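Robert's tagging scheme above can be sketched in a few lines of Python (all names invented; this is not the real lock manager): each backend-local lock entry remembers which query took it and why, so a conflicting acquisition from the *same* session can fail with an error that explains the original purpose, generalizing what CheckTableNotInUse() does with refcounts today.

```python
# Purpose flags for why a local lock was taken (illustrative only).
QUERY_SCAN, DDL_DROP = "scan", "drop"

class LocalLockTable:
    """Toy backend-local lock table with per-query tags."""
    def __init__(self):
        self.entries = {}   # relation name -> (query_id, purpose)

    def acquire(self, rel, query_id, purpose):
        held = self.entries.get(rel)
        if held is not None:
            held_qid, held_purpose = held
            # Same session, different query, incompatible purposes:
            # report why the lock was originally taken instead of
            # silently self-granting.
            if held_qid != query_id and DDL_DROP in (purpose, held_purpose):
                raise RuntimeError(
                    f'cannot {purpose} "{rel}": locked by query '
                    f'{held_qid} for {held_purpose}')
        self.entries[rel] = (query_id, purpose)

    def release_query(self, query_id):
        self.entries = {rel: e for rel, e in self.entries.items()
                        if e[0] != query_id}
```

So a function that drops a table mid-query would get a clean "locked by query N for scan" error rather than a crash or an internal error later on.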
IIUC,\nAmit's original design involved having the plan cache call some new\nexecutor function to do partition pruning before lock acquisition, and\nthen passing that data structure around, including back to the\nexecutor, so that we didn't repeat the pruning we already did, which\nwould be a bad thing to do not only because it would incur CPU cost\nbut also because really bad things would happen if we got a different\nanswer the second time. IIUC, you didn't think that was going to work\nout nicely, and suggested instead moving the pruning+locking to\nExecutorStart() time. But now Amit is finding problems with that\napproach, because by the time we reach PortalRun() for the\nPORTAL_MULTI_QUERY case, it's too late to replan, because we can't ask\nthe plancache to replan just one query from the list; and if we try to\nfix that by moving ExecutorStart() to PortalStart(), then there are\nother problems. Do you have a view on what the way forward might be?\n\nThis thread has gotten a tad depressing, honestly. All of the opinions\nabout what we ought to do seem to be based on the firm conviction that\nX or Y or Z will not work, rather than on the confidence that A or B\nor C will work. Yet I'm inclined to believe this problem is solvable.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Aug 2024 13:38:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> But that seems somewhat incidental to what this thread is about.\n\nPerhaps. But if we're running into issues related to that, it might\nbe good to set aside the long-term goal for a bit and come up with\na cleaner answer for intra-session locking. 
That could allow the\npruning problem to be solved more cleanly in turn, and it'd be\nan improvement even if not.\n\n> Do you have a view on what the way forward might be?\n\nI'm fresh out of ideas at the moment, other than having a hope that\ndivide-and-conquer (ie, solving subproblems first) might pay off.\n\n> This thread has gotten a tad depressing, honestly. All of the opinions\n> about what we ought to do seem to be based on the firm conviction that\n> X or Y or Z will not work, rather than on the confidence that A or B\n> or C will work. Yet I'm inclined to believe this problem is solvable.\n\nYeah. We are working in an extremely not-green field here, which\nmeans it's a lot easier to see pre-existing reasons why X will not\nwork than to have confidence that it will work. But hey, if this\nwere easy then we'd have done it already.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Aug 2024 13:52:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Mon, Aug 19, 2024 at 1:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > But that seems somewhat incidental to what this thread is about.\n>\n> Perhaps. But if we're running into issues related to that, it might\n> be good to set aside the long-term goal for a bit and come up with\n> a cleaner answer for intra-session locking. That could allow the\n> pruning problem to be solved more cleanly in turn, and it'd be\n> an improvement even if not.\n\nMaybe, but the pieces aren't quite coming together for me. Solving\nthis would mean that if we execute a stale plan, we'd be more likely\nto get a good error and less likely to get a bad, nasty-looking\ninternal error, or a crash. 
That's good on its own terms, but we don't\nreally want user queries to produce errors at all, so I don't think\nwe'd feel any more free to rearrange the order of operations than we\ndo today.\n\n> > Do you have a view on what the way forward might be?\n>\n> I'm fresh out of ideas at the moment, other than having a hope that\n> divide-and-conquer (ie, solving subproblems first) might pay off.\n\nFair enough, but why do you think that the original approach of\ncreating a data structure from within the plan cache mechanism\n(probably via a call into some new executor entrypoint) and then\nfeeding that through to ExecutorRun() time can't work? Is it possible\nyou latched onto some non-optimal decisions that the early versions of\nthe patch made, rather than there being a fundamental problem with the\nconcept?\n\nI actually thought the do-it-at-executorstart-time approach sounded\npretty good, even though we might have to abandon planstate tree\ninitialization partway through, right up until Amit started talking\nabout moving ExecutorStart() from PortalRun() to PortalStart(), which\nI have a feeling is going to create a bigger problem than we can\nsolve. I think if we want to save that approach, we should try to\nfigure out if we can teach the plancache to replan one query from a\nlist without replanning the others, which seems like it might allow us\nto keep the order of major operations unchanged. Otherwise, it makes\nsense to me to have another go at the other approach, at least to make\nsure we understand clearly why it can't work.\n\n> Yeah. We are working in an extremely not-green field here, which\n> means it's a lot easier to see pre-existing reasons why X will not\n> work than to have confidence that it will work. 
But hey, if this\n> were easy then we'd have done it already.\n\nYeah, true.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Aug 2024 14:20:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Aug 20, 2024 at 1:39 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Aug 16, 2024 at 8:36 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > So it is possible for the executor to try to run a plan that has\n> > become invalid since it was created, so...\n>\n> I'm not sure what the \"so what\" here is.\n\nI meant that if the executor has to deal with broken plans anyway, we\nmight as well lean into that fact by choosing not to handle only the\ncached plan case in a certain way. Yes, I understand that that's not\na good justification.\n\n> > One perhaps crazy idea [1]:\n> >\n> > What if we remove AcquireExecutorLocks() and move the responsibility\n> > of taking the remaining necessary locks into the executor (those on\n> > any inheritance children that are added during planning and thus not\n> > accounted for by AcquirePlannerLocks()), like the patch already does,\n> > but don't make it also check if the plan has become invalid, which it\n> > can't do anyway unless it's from a CachedPlan. That means we instead\n> > let the executor throw any errors that occur when trying to either\n> > initialize the plan because of the changes that have occurred to the\n> > objects referenced in the plan, like what is happening in the above\n> > example. If that case is going to be rare anway, why spend energy on\n> > checking the validity and replan, especially if that's not an easy\n> > thing to do as we're finding out. 
In the above example, we could say\n> > that it's a user error to create a rule like that, so it should not\n> > happen in practice, but when it does, the executor seems to deal with\n> > it correctly by refusing to execute a broken plan . Perhaps it's more\n> > worthwhile to make the executor behave correctly in face of plan\n> > invalidation than teach the rest of the system to deal with the\n> > executor throwing its hands up when it runs into an invalid plan?\n> > Again, I think this may be a crazy line of thinking but just wanted to\n> > get it out there.\n>\n> I don't know whether this is crazy or not. I think there are two\n> issues. One, the set of checks that we have right now might not be\n> complete, and we might just not have realized that because it happens\n> infrequently enough that we haven't found all the bugs. If that's so,\n> then a change like this could be a good thing, because it might force\n> us to fix stuff we should be fixing anyway. I have a feeling that some\n> of the checks you hit there were added as bug fixes long after the\n> code was written originally, so my confidence that we don't have more\n> bugs isn't especially high.\n\nThis makes sense.\n\n> And two, it matters a lot how frequent the errors will be in practice.\n> I think we normally try to replan rather than let a stale plan be used\n> because we want to not fail, because users don't like failure. If the\n> design you propose here would make failures more (or less) frequent,\n> then that's a problem (or awesome).\n\nI think we'd modify plancache.c to postpone the locking of only\nprunable relations (i.e., partitions), so we're looking at only a\nhandful of concurrent modifications that are going to cause execution\nerrors. That's because we disallow many DDL modifications of\npartitions unless they are done via recursion from the parent, so the\nspace of errors in practice would be smaller compared to if we were to\npostpone *all* cached plan locks to ExecInitNode() time. 
DROP INDEX\na_partition_only_index comes to mind as something that might cause an\nerror. I've not tested if other partition-only constraints can cause\nunsafe behaviors.\n\nPerhaps we can add the check for CachedPlan.is_valid after every\ntable_open() and index_open() in the executor that takes a lock or at\nall the places we discussed previously and throw the error (say:\n\"cached plan is no longer valid\") if it's false. That's better than\nrunning into and throwing some random error by soldiering ahead\nwith its initialization / execution, but still a loss in terms of user\nexperience because we're adding a new failure mode, however rare.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Tue, 20 Aug 2024 22:00:38 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Aug 20, 2024 at 3:21 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Aug 19, 2024 at 1:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > But that seems somewhat incidental to what this thread is about.\n> >\n> > Perhaps. But if we're running into issues related to that, it might\n> > be good to set aside the long-term goal for a bit and come up with\n> > a cleaner answer for intra-session locking. That could allow the\n> > pruning problem to be solved more cleanly in turn, and it'd be\n> > an improvement even if not.\n>\n> Maybe, but the pieces aren't quite coming together for me. Solving\n> this would mean that if we execute a stale plan, we'd be more likely\n> to get a good error and less likely to get a bad, nasty-looking\n> internal error, or a crash.
That's good on its own terms, but we don't\n> really want user queries to produce errors at all, so I don't think\n> we'd feel any more free to rearrange the order of operations than we\n> do today.\n\nYeah, it's unclear whether executing a potentially stale plan is an\nacceptable tradeoff compared to replanning, especially if it occurs\nrarely. Personally, I would prefer that it is.\n\n> > > Do you have a view on what the way forward might be?\n> >\n> > I'm fresh out of ideas at the moment, other than having a hope that\n> > divide-and-conquer (ie, solving subproblems first) might pay off.\n>\n> Fair enough, but why do you think that the original approach of\n> creating a data structure from within the plan cache mechanism\n> (probably via a call into some new executor entrypoint) and then\n> feeding that through to ExecutorRun() time can't work?\n\nThat would be ExecutorStart(). The data structure need not be\nreferenced after ExecInitNode().\n\n> Is it possible\n> you latched onto some non-optimal decisions that the early versions of\n> the patch made, rather than there being a fundamental problem with the\n> concept?\n>\n> I actually thought the do-it-at-executorstart-time approach sounded\n> pretty good, even though we might have to abandon planstate tree\n> initialization partway through, right up until Amit started talking\n> about moving ExecutorStart() from PortalRun() to PortalStart(), which\n> I have a feeling is going to create a bigger problem than we can\n> solve. I think if we want to save that approach, we should try to\n> figure out if we can teach the plancache to replan one query from a\n> list without replanning the others, which seems like it might allow us\n> to keep the order of major operations unchanged. 
Otherwise, it makes\n> sense to me to have another go at the other approach, at least to make\n> sure we understand clearly why it can't work.\n\n+1\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Tue, 20 Aug 2024 22:14:58 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Aug 20, 2024 at 9:00 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> I think we'd modify plancache.c to postpone the locking of only\n> prunable relations (i.e., partitions), so we're looking at only a\n> handful of concurrent modifications that are going to cause execution\n> errors. That's because we disallow many DDL modifications of\n> partitions unless they are done via recursion from the parent, so the\n> space of errors in practice would be smaller compared to if we were to\n> postpone *all* cached plan locks to ExecInitNode() time. DROP INDEX\n> a_partion_only_index comes to mind as something that might cause an\n> error. I've not tested if other partition-only constraints can cause\n> unsafe behaviors.\n\nThis seems like a valid point to some extent, but in other contexts\nwe've had discussions about how we don't actually guarantee all that\nmuch uniformity between a partitioned table and its partitions, and\nit's been questioned whether we made the right decisions there. So I'm\nnot entirely sure that the surface area for problems here will be as\nnarrow as you're hoping -- I think we'd need to go through all of the\nALTER TABLE variants and think it through. But maybe the problems\naren't that bad.\n\nIt does seem like constraints can change the plan. 
Imagine the\npartition had a CHECK(false) constraint before and now doesn't, or\nsomething.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 20 Aug 2024 10:53:45 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Aug 20, 2024 at 11:53 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Aug 20, 2024 at 9:00 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > I think we'd modify plancache.c to postpone the locking of only\n> > prunable relations (i.e., partitions), so we're looking at only a\n> > handful of concurrent modifications that are going to cause execution\n> > errors. That's because we disallow many DDL modifications of\n> > partitions unless they are done via recursion from the parent, so the\n> > space of errors in practice would be smaller compared to if we were to\n> > postpone *all* cached plan locks to ExecInitNode() time. DROP INDEX\n> > a_partion_only_index comes to mind as something that might cause an\n> > error. I've not tested if other partition-only constraints can cause\n> > unsafe behaviors.\n>\n> This seems like a valid point to some extent, but in other contexts\n> we've had discussions about how we don't actually guarantee all that\n> much uniformity between a partitioned table and its partitions, and\n> it's been questioned whether we made the right decisions there. So I'm\n> not entirely sure that the surface area for problems here will be as\n> narrow as you're hoping -- I think we'd need to go through all of the\n> ALTER TABLE variants and think it through. But maybe the problems\n> aren't that bad.\n\nMany changeable properties that are reflected in the RelationData of a\npartition after getting the lock on it seem to cause no issues as long\nas the executor code only looks at RelationData, which is true for\nmost Scan nodes. 
It also seems true for ModifyTable which looks into\nRelationData for relation properties relevant to insert/deletes.\n\nThe two things that don't cope are:\n\n* Index Scan nodes with concurrent DROP INDEX of partition-only indexes.\n\n* Concurrent DROP CONSTRAINT of partition-only CHECK and NOT NULL\nconstraints can lead to incorrect result as I write below.\n\n> It does seem like constraints can change the plan. Imagine the\n> partition had a CHECK(false) constraint before and now doesn't, or\n> something.\n\nYeah, if the CHECK constraint gets dropped concurrently, any new rows\nthat got added after that will not be returned by executing a stale\ncached plan, because the plan would have been created based on the\nassumption that such rows shouldn't be there due to the CHECK\nconstraint. We currently don't explicitly check that the constraints\nthat were used during planning still exist before executing the plan.\n\nOverall, I'm starting to feel less enthused by the idea throwing an\nerror in the executor due to known and unknown hazards of trying to\nexecute a stale plan. Even if we made a note in the docs of such\nhazards, any users who run into these rare errors are likely to head\nto -bugs or -hackers anyway.\n\nTom said we should perhaps look at the hazards caused by intra-session\nlocking, but we'd still be left with the hazards of missing index and\nconstraints, AFAICS, due to DROP from other sessions.\n\nSo, the options:\n\n* The replanning aspect of the lock-in-the-executor design would be\nsimpler if a CachedPlan contained the plan for a single query rather\nthan a list of queries, as previously mentioned. This is particularly\ndue to the requirements of the PORTAL_MULTI_QUERY case. 
However, this\noption might be impractical.\n\n* Polish the patch for the old design of doing the initial pruning\nbefore AcquireExecutorLocks() and focus on hashing out any bugs and\nissues of that design.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Wed, 21 Aug 2024 21:45:24 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Wed, Aug 21, 2024 at 8:45 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> * The replanning aspect of the lock-in-the-executor design would be\n> simpler if a CachedPlan contained the plan for a single query rather\n> than a list of queries, as previously mentioned. This is particularly\n> due to the requirements of the PORTAL_MULTI_QUERY case. However, this\n> option might be impractical.\n\nIt might be, but maybe it would be worth a try? I mean,\nGetCachedPlan() seems to just call pg_plan_queries() which just loops\nover the list of query trees and does the same thing for each one. If\nwe wanted to replan a single query, why couldn't we do\nfake_querytree_list = list_make1(list_nth(querytree_list, n)) and then\ncall pg_plan_queries(fake_querytree_list)? Or something equivalent to\nthat. We could have a new GetCachedSinglePlan(cplan, n) to do this.\n\n> * Polish the patch for the old design of doing the initial pruning\n> before AcquireExecutorLocks() and focus on hashing out any bugs and\n> issues of that design.\n\nThat's also an option. 
It probably has issues too, but I don't know\nwhat they are exactly.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 21 Aug 2024 09:10:06 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Wed, Aug 21, 2024 at 10:10 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Aug 21, 2024 at 8:45 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > * The replanning aspect of the lock-in-the-executor design would be\n> > simpler if a CachedPlan contained the plan for a single query rather\n> > than a list of queries, as previously mentioned. This is particularly\n> > due to the requirements of the PORTAL_MULTI_QUERY case. However, this\n> > option might be impractical.\n>\n> It might be, but maybe it would be worth a try? I mean,\n> GetCachedPlan() seems to just call pg_plan_queries() which just loops\n> over the list of query trees and does the same thing for each one. If\n> we wanted to replan a single query, why couldn't we do\n> fake_querytree_list = list_make1(list_nth(querytree_list, n)) and then\n> call pg_plan_queries(fake_querytree_list)? Or something equivalent to\n> that. We could have a new GetCachedSinglePlan(cplan, n) to do this.\n\nI've been hacking to prototype this, and it's showing promise. It\nhelps make the replan loop at the call sites that start the executor\nwith an invalidatable plan more localized and less prone to\naction-at-a-distance issues. However, the interface and contract of\nthe new function in my prototype are pretty specialized for the replan\nloop in this context—meaning it's not as general-purpose as\nGetCachedPlan(). Essentially, what you get when you call it is a\n'throwaway' CachedPlan containing only the plan for the query that\nfailed during ExecutorStart(), not a plan integrated into the original\nCachedPlanSource's stmt_list. 
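A toy Python model (invented names; not the actual plancache API) of the GetCachedSinglePlan() shape being discussed: plan exactly one query from the cached source by wrapping it in a single-element list, yielding a throwaway plan that is never merged back into the original list of statements.

```python
def plan_queries(querytree_list, catalog_generation):
    # Stand-in for pg_plan_queries(): a "plan" here is just the query
    # paired with the catalog generation it was planned under.
    return [(q, catalog_generation) for q in querytree_list]

class CachedPlanSource:
    def __init__(self, querytree_list):
        self.querytree_list = list(querytree_list)

def get_cached_plan(src, catalog_generation):
    # Plans the whole list, like GetCachedPlan().
    return plan_queries(src.querytree_list, catalog_generation)

def get_cached_single_plan(src, n, catalog_generation):
    # Equivalent of pg_plan_queries(list_make1(list_nth(querytree_list, n))):
    # only query n is replanned, and the caller is expected to release
    # the result after running it.
    fake_querytree_list = [src.querytree_list[n]]
    return plan_queries(fake_querytree_list, catalog_generation)
```

The point of the shape is visible even in the toy: replanning query n at a newer catalog generation leaves the source's query list, and every other cached plan, untouched.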
A call site entering the replan loop\nwill retry the execution with that throwaway plan, release it once\ndone, and resume looping over the plans in the original list. The\ninvalid plan that remains in the original list will be discarded and\nreplanned in the next call to GetCachedPlan() using the same\nCachedPlanSource. While that may sound undesirable, I'm inclined to\nthink it's not something that needs optimization, given that we're\nexpecting this code path to be taken rarely.\n\nI'll post a version of a revamped locks-in-the-executor patch set\nusing the above function after debugging some more.\n\n--\nThanks, Amit Langote\n\n\n", "msg_date": "Fri, 23 Aug 2024 21:48:27 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Fri, Aug 23, 2024 at 9:48 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Aug 21, 2024 at 10:10 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Wed, Aug 21, 2024 at 8:45 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > * The replanning aspect of the lock-in-the-executor design would be\n> > > simpler if a CachedPlan contained the plan for a single query rather\n> > > than a list of queries, as previously mentioned. This is particularly\n> > > due to the requirements of the PORTAL_MULTI_QUERY case. However, this\n> > > option might be impractical.\n> >\n> > It might be, but maybe it would be worth a try? I mean,\n> > GetCachedPlan() seems to just call pg_plan_queries() which just loops\n> > over the list of query trees and does the same thing for each one. If\n> > we wanted to replan a single query, why couldn't we do\n> > fake_querytree_list = list_make1(list_nth(querytree_list, n)) and then\n> > call pg_plan_queries(fake_querytree_list)? Or something equivalent to\n> > that. We could have a new GetCachedSinglePlan(cplan, n) to do this.\n>\n> I've been hacking to prototype this, and it's showing promise. 
It\n> helps make the replan loop at the call sites that start the executor\n> with an invalidatable plan more localized and less prone to\n> action-at-a-distance issues. However, the interface and contract of\n> the new function in my prototype are pretty specialized for the replan\n> loop in this context—meaning it's not as general-purpose as\n> GetCachedPlan(). Essentially, what you get when you call it is a\n> 'throwaway' CachedPlan containing only the plan for the query that\n> failed during ExecutorStart(), not a plan integrated into the original\n> CachedPlanSource's stmt_list. A call site entering the replan loop\n> will retry the execution with that throwaway plan, release it once\n> done, and resume looping over the plans in the original list. The\n> invalid plan that remains in the original list will be discarded and\n> replanned in the next call to GetCachedPlan() using the same\n> CachedPlanSource. While that may sound undesirable, I'm inclined to\n> think it's not something that needs optimization, given that we're\n> expecting this code path to be taken rarely.\n>\n> I'll post a version of a revamped locks-in-the-executor patch set\n> using the above function after debugging some more.\n\nHere it is.\n\n0001 implements changes to defer the locking of runtime-prunable\nrelations to the executor. The new design introduces a bitmapset\nfield in PlannedStmt to distinguish at runtime between relations that\nare prunable whose locking can be deferred until ExecInitNode() and\nthose that are not and must be locked in advance. The set of prunable\nrelations can be constructed by looking at all the PartitionPruneInfos\nin the plan and checking which are subject to \"initial\" pruning steps.\nThe set of unprunable relations is obtained by subtracting those from\nthe set of all RT indexes. 
This design gets rid of one annoying\naspect of the old design which was the need to add specialized fields\nto store the RT indexes of partitioned relations that are not\notherwise referenced in the plan tree. That was necessary because in\nthe old design, I had removed the function AcquireExecutorLocks()\naltogether to defer the locking of all child relations to execution.\nIn the new design such relations are still locked by\nAcquireExecutorLocks().\n\n0002 is the old patch to make ExecEndNode() robust against partially\ninitialized PlanState nodes by adding NULL checks.\n\n0003 is the patch to add changes to deal with the CachedPlan becoming\ninvalid before the deferred locks on prunable relations are taken.\nI've moved the replan loop into a new wrapper-over-ExecutorStart()\nfunction instead of having the same logic at multiple sites. The\nreplan logic uses the GetSingleCachedPlan() described in the quoted\ntext. The callers of the new ExecutorStart()-wrapper, which I've\ndubbed ExecutorStartExt(), need to pass the CachedPlanSource and a\nquery_index, which is the index of the query being executed in the\nlist CachedPlanSource.query_list. They are needed by\nGetSingleCachedPlan(). The changes outside the executor are pretty\nminimal in this design and all the difficulties of having to loop back\nto GetCachedPlan() are now gone. I like how this turned out.\n\nOne idea that I think might be worth trying to reduce the footprint of\n0003 is to try to lock the prunable relations in a step of InitPlan()\nseparate from ExecInitNode(), which can be implemented by doing the\ninitial runtime pruning in that separate step. 
That way, we'll have\nall the necessary locks before calling ExecInitNode() and so we don't\nneed to sprinkle the CachedPlanStillValid() checks all over the place\nand worry about missed checks and dealing with partially initialized\nPlanState trees.\n\n-- \nThanks, Amit Langote", "msg_date": "Thu, 29 Aug 2024 22:34:17 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "Hi,\n\nOn Thu, Aug 29, 2024 at 9:34 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Fri, Aug 23, 2024 at 9:48 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Wed, Aug 21, 2024 at 10:10 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > On Wed, Aug 21, 2024 at 8:45 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > > * The replanning aspect of the lock-in-the-executor design would be\n> > > > simpler if a CachedPlan contained the plan for a single query rather\n> > > > than a list of queries, as previously mentioned. This is particularly\n> > > > due to the requirements of the PORTAL_MULTI_QUERY case. However, this\n> > > > option might be impractical.\n> > >\n> > > It might be, but maybe it would be worth a try? I mean,\n> > > GetCachedPlan() seems to just call pg_plan_queries() which just loops\n> > > over the list of query trees and does the same thing for each one. If\n> > > we wanted to replan a single query, why couldn't we do\n> > > fake_querytree_list = list_make1(list_nth(querytree_list, n)) and then\n> > > call pg_plan_queries(fake_querytree_list)? Or something equivalent to\n> > > that. We could have a new GetCachedSinglePlan(cplan, n) to do this.\n> >\n> > I've been hacking to prototype this, and it's showing promise. It\n> > helps make the replan loop at the call sites that start the executor\n> > with an invalidatable plan more localized and less prone to\n> > action-at-a-distance issues. 
However, the interface and contract of\n> > the new function in my prototype are pretty specialized for the replan\n> > loop in this context—meaning it's not as general-purpose as\n> > GetCachedPlan(). Essentially, what you get when you call it is a\n> > 'throwaway' CachedPlan containing only the plan for the query that\n> > failed during ExecutorStart(), not a plan integrated into the original\n> > CachedPlanSource's stmt_list. A call site entering the replan loop\n> > will retry the execution with that throwaway plan, release it once\n> > done, and resume looping over the plans in the original list. The\n> > invalid plan that remains in the original list will be discarded and\n> > replanned in the next call to GetCachedPlan() using the same\n> > CachedPlanSource. While that may sound undesirable, I'm inclined to\n> > think it's not something that needs optimization, given that we're\n> > expecting this code path to be taken rarely.\n> >\n> > I'll post a version of a revamped locks-in-the-executor patch set\n> > using the above function after debugging some more.\n>\n> Here it is.\n>\n> 0001 implements changes to defer the locking of runtime-prunable\n> relations to the executor. The new design introduces a bitmapset\n> field in PlannedStmt to distinguish at runtime between relations that\n> are prunable whose locking can be deferred until ExecInitNode() and\n> those that are not and must be locked in advance. The set of prunable\n> relations can be constructed by looking at all the PartitionPruneInfos\n> in the plan and checking which are subject to \"initial\" pruning steps.\n> The set of unprunable relations is obtained by subtracting those from\n> the set of all RT indexes. This design gets rid of one annoying\n> aspect of the old design which was the need to add specialized fields\n> to store the RT indexes of partitioned relations that are not\n> otherwise referenced in the plan tree. 
That was necessary because in\n> the old design, I had removed the function AcquireExecutorLocks()\n> altogether to defer the locking of all child relations to execution.\n> In the new design such relations are still locked by\n> AcquireExecutorLocks().\n>\n> 0002 is the old patch to make ExecEndNode() robust against partially\n> initialized PlanState nodes by adding NULL checks.\n>\n> 0003 is the patch to add changes to deal with the CachedPlan becoming\n> invalid before the deferred locks on prunable relations are taken.\n> I've moved the replan loop into a new wrapper-over-ExecutorStart()\n> function instead of having the same logic at multiple sites. The\n> replan logic uses the GetSingleCachedPlan() described in the quoted\n> text. The callers of the new ExecutorStart()-wrapper, which I've\n> dubbed ExecutorStartExt(), need to pass the CachedPlanSource and a\n> query_index, which is the index of the query being executed in the\n> list CachedPlanSource.query_list. They are needed by\n> GetSingleCachedPlan(). The changes outside the executor are pretty\n> minimal in this design and all the difficulties of having to loop back\n> to GetCachedPlan() are now gone. I like how this turned out.\n>\n> One idea that I think might be worth trying to reduce the footprint of\n> 0003 is to try to lock the prunable relations in a step of InitPlan()\n> separate from ExecInitNode(), which can be implemented by doing the\n> initial runtime pruning in that separate step. 
That way, we'll have\n> all the necessary locks before calling ExecInitNode() and so we don't\n> need to sprinkle the CachedPlanStillValid() checks all over the place\n> and worry about missed checks and dealing with partially initialized\n> PlanState trees.\n>\n> --\n> Thanks, Amit Langote\n\n@@ -1241,7 +1244,7 @@ GetCachedPlan(CachedPlanSource *plansource,\nParamListInfo boundParams,\n if (customplan)\n {\n /* Build a custom plan */\n- plan = BuildCachedPlan(plansource, qlist, boundParams, queryEnv);\n+ plan = BuildCachedPlan(plansource, qlist, boundParams, queryEnv, true);\n\nIs the *true* here a typo? Seems it should be *false* for custom plan?\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Sat, 31 Aug 2024 20:30:34 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": false, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Sat, Aug 31, 2024 at 9:30 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n> @@ -1241,7 +1244,7 @@ GetCachedPlan(CachedPlanSource *plansource,\n> ParamListInfo boundParams,\n> if (customplan)\n> {\n> /* Build a custom plan */\n> - plan = BuildCachedPlan(plansource, qlist, boundParams, queryEnv);\n> + plan = BuildCachedPlan(plansource, qlist, boundParams, queryEnv, true);\n>\n> Is the *true* here a typo? Seems it should be *false* for custom plan?\n\nThat's correct, thanks for catching that. 
Will fix.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Mon, 2 Sep 2024 17:19:39 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Mon, Sep 2, 2024 at 5:19 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Sat, Aug 31, 2024 at 9:30 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n> > @@ -1241,7 +1244,7 @@ GetCachedPlan(CachedPlanSource *plansource,\n> > ParamListInfo boundParams,\n> > if (customplan)\n> > {\n> > /* Build a custom plan */\n> > - plan = BuildCachedPlan(plansource, qlist, boundParams, queryEnv);\n> > + plan = BuildCachedPlan(plansource, qlist, boundParams, queryEnv, true);\n> >\n> > Is the *true* here a typo? Seems it should be *false* for custom plan?\n>\n> That's correct, thanks for catching that. Will fix.\n\nDone.\n\nI've also rewritten the new GetSingleCachedPlan() function in 0003.\nThe most glaring bug in the previous version was that the transient\nCachedPlan it creates cannot be seen by PlanCacheRelCallback() et al\nfunctions because it was intentionally not linked to the\nCachedPlanSource, so the CachedPlan would not be invalidated even\nif some prunable relation got changed before it is locked during\nExecutorStart(). I've added a new list standalone_plan_list to add\nthese to and changed the inval callback functions to invalidate any\nplans contained in them.\n\nAnother thing I found out through testing is that CachedPlanSource can\nhave become invalid since leaving GetCachedPlan() (actually even\nbefore returning from that function) because of\nPlanCacheSysCallback(), which drops/invalidates *all* plans when a\nsyscache is invalidated. 
There are comments in plancache.c (see\nBuildCachedPlan()) saying that such invalidations are, in theory,\nfalse positives, but that gave me a pause nonetheless.\n\nFinally, instead of calling GetCachedPlan() from GetSingleCachedPlan()\nto create a plan for only the query whose plan got invalidated, which\nrequired a bunch of care to ensure that the CachedPlanSource is not\noverwritten with the information about this single-query planning,\nI've made GetSingleCachedPlan() create the PlannedStmt and the\ndetached CachedPlan object on its own, borrowing the minimal necessary\ncode from BuildCachedPlan() to do so.\n\n-- \nThanks, Amit Langote", "msg_date": "Thu, 5 Sep 2024 18:55:47 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Thu, Aug 29, 2024 at 10:34 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> One idea that I think might be worth trying to reduce the footprint of\n> 0003 is to try to lock the prunable relations in a step of InitPlan()\n> separate from ExecInitNode(), which can be implemented by doing the\n> initial runtime pruning in that separate step. That way, we'll have\n> all the necessary locks before calling ExecInitNode() and so we don't\n> need to sprinkle the CachedPlanStillValid() checks all over the place\n> and worry about missed checks and dealing with partially initialized\n> PlanState trees.\n\nI've worked on this and found that it results in a much simpler design.\n\nAttached are 0001 and 0002, which contain patches to refactor the\nruntime pruning code. These changes move initial pruning outside of\nExecInitNode() and use the results during ExecInitNode() to determine\nthe set of child subnodes to initialize.\n\nWith that in place, the patches (0003, 0004) that move the locking of\nprunable relations from plancache.c into the executor become simpler.\nThey no longer need to modify any code called by ExecInitNode(). 
Since\nno new locks are taken during ExecInitNode(), I didn't have to worry\nabout changing all the code involved in PlanState tree initialization\nto add checks for CachedPlan validity. The check is only needed after\nperforming initial pruning, and if the CachedPlan is invalid,\nExecInitNode() won’t be called in the first place.\n\n-- \nThanks, Amit Langote", "msg_date": "Tue, 17 Sep 2024 21:57:03 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Tue, Sep 17, 2024 at 9:57 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Aug 29, 2024 at 10:34 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > One idea that I think might be worth trying to reduce the footprint of\n> > 0003 is to try to lock the prunable relations in a step of InitPlan()\n> > separate from ExecInitNode(), which can be implemented by doing the\n> > initial runtime pruning in that separate step. That way, we'll have\n> > all the necessary locks before calling ExecInitNode() and so we don't\n> > need to sprinkle the CachedPlanStillValid() checks all over the place\n> > and worry about missed checks and dealing with partially initialized\n> > PlanState trees.\n>\n> I've worked on this and found that it results in a much simpler design.\n>\n> Attached are 0001 and 0002, which contain patches to refactor the\n> runtime pruning code. These changes move initial pruning outside of\n> ExecInitNode() and use the results during ExecInitNode() to determine\n> the set of child subnodes to initialize.\n>\n> With that in place, the patches (0003, 0004) that move the locking of\n> prunable relations from plancache.c into the executor becomes simpler.\n> It no longer needs to modify any code called by ExecInitNode(). 
Since\n> no new locks are taken during ExecInitNode(), I didn't have to worry\n> about changing all the code involved in PlanState tree initialization\n> to add checks for CachedPlan validity. The check is only needed after\n> performing initial pruning, and if the CachedPlan is invalid,\n> ExecInitNode() won’t be called in the first place.\n\nSorry, I had missed merging some hunks into 0002 that fixed obsolete\ncomments. Fixed in the attached v54.\n\nRegarding 0002, I was a bit bothered by the need to add a new function\njust to iterate over the PartitionPruningDatas and the\nPartitionedRelPruningData they contain, solely to initialize the\nPartitionPruneContext needed for exec pruning. To address this, I\npropose 0003, which moves the initialization of those contexts to be\ndone \"lazily\" in find_matching_subplan_recurse(), where they are\nactually used. To make this work, I added an is_valid flag to\nPartitionPruneContext, which is checked as follows in the code block\nwhere it's initialized:\n\n+ if (unlikely(!pprune->exec_context.is_valid))\n\nI didn't notice any overhead of adding this to\nfind_matching_partitions_recurse() which is called for every instance\nof exec pruning, so I think it's worthwhile to consider 0003.\n\nI realized that I had missed considering, in the\ndelay-locking-to-executor patch (now 0004), that there may be plan\nobjects belonging to pruned partitions, such as RowMarks and\nResultRelInfos, which should not be initialized.\nExecGetRangeTableRelation() invoked with the RT indexes in these\nobjects would cause crashes in Assert builds since the pruned\npartitions would not have been locked. I've updated the patch to\nignore RowMarks and result relations (in ModifyTable.resultRelations)\nfor pruned child relations, which required adding more accounting info\nto EState to store the bitmapset of unpruned RT indexes. 
For\nResultRelInfos, I took the approach of memsetting them to 0 for pruned\nresult relations and adding checks at multiple sites to ensure the\nResultRelInfo being handled is valid. I recall previously proposing\nlazy initialization for these objects when first needed [1], which\nwould make the added code unnecessary, but I might save that for\nanother time.\n\n--\nThanks, Amit Langote\n[1] https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp", "msg_date": "Thu, 19 Sep 2024 17:39:51 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Thu, Sep 19, 2024 at 5:39 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> For\n> ResultRelInfos, I took the approach of memsetting them to 0 for pruned\n> result relations and adding checks at multiple sites to ensure the\n> ResultRelInfo being handled is valid.\n\nAfter some reflection, I realized that nobody would think that that\napproach is very robust. In the attached, I’ve modified\nExecInitModifyTable() to allocate ResultRelInfos only for unpruned\nrelations, instead of allocating for all in\nModifyTable.resultRelations and setting pruned ones to 0. 
This\napproach feels more robust.\n\n-- \nThanks, Amit Langote", "msg_date": "Thu, 19 Sep 2024 21:10:04 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" }, { "msg_contents": "On Thu, Sep 19, 2024 at 9:10 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Sep 19, 2024 at 5:39 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > For\n> > ResultRelInfos, I took the approach of memsetting them to 0 for pruned\n> > result relations and adding checks at multiple sites to ensure the\n> > ResultRelInfo being handled is valid.\n>\n> After some reflection,\n\nNot enough reflection, evidently...\n\n> I realized that nobody would think that that\n> approach is very robust. In the attached, I’ve modified\n> ExecInitModifyTable() to allocate ResultRelInfos only for unpruned\n> relations, instead of allocating for all in\n> ModifyTable.resultRelations and setting pruned ones to 0. This\n> approach feels more robust.\n\nExcept, I forgot that ModifyTable has other lists that parallel\nresultRelations (of the same length) viz. withCheckOptionLists,\nreturningLists, and updateColnosLists, which need to be similarly\ntruncated to only consider unpruned relations. I've updated 0004 to\ndo so. This was broken even in the other design where locking is\ndelayed all the way until ExecInitAppend() does initial pruning,\nbecause ResultRelInfos are created before initializing the plan\nsubtree containing the Append node, which would try to lock and open\n*all* partitions.\n\nAlso, I've switched the order of 0002 and 0003 to avoid a situation\nwhere I add a function in 0002 only to remove it in 0003. 
By doing\nthe refactoring to initialize PartitionPruneContexts lazily first, the\npatch to move the initial pruning to occur before ExecInitNode()\nbecame much simpler as it doesn't need to touch the code related to\nexec pruning.\n\n-- \nThanks, Amit Langote", "msg_date": "Fri, 20 Sep 2024 17:10:03 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: generic plans and \"initial\" pruning" } ]
[ { "msg_contents": "Hi,\n\nI've revisited the idea to somehow use foreign keys to do joins,\nin the special but common case when joining on columns that exactly match a foreign key.\n\nThe idea is to add a new ternary operator, which would be allowed only in the FROM clause.\n\nIt would take three operands:\n\n1) referencing_table_alias\n2) foreign_key_constraint_name\n3) referenced_table_alias\n\nPOSSIBLE BENEFITS\n\n* Eliminate risk of joining on the wrong columns\nAlthough probably an uncommon class of bugs, a join can be made on the wrong columns, which could go undetected if the desired row is included by coincidence, such as if the test environment might only contain a single row in some table, and the join condition happened to be always true.\nBy joining using the foreign key, it can be verified at compile time that the referenced_table_alias is actually an alias for the table referenced by the foreign key. If some other alias would be given, an error would be thrown, to avoid failure.\n\n* Conciser syntax\nIn a traditional join, you have to explicitly state all columns for the referencing and referenced table.\nI think writing joins feels like you are repeating the same table aliases and column names over and over again, all the time.\nThis is especially true for multiple-column joins.\nThis is somewhat addressed by the USING join form, but USING has other drawbacks, which is why I tend to avoid it except for one-off queries.\nWhen having to use fully-qualified table aliases, that adds even further to the verboseness.\n\n* Makes abnormal joins stand out\nIf joining on something else than foreign key columns, or some inequality expression, such joins will continue to be written in the traditional way, and will therefore stand out and be more visible, if all other foreign key-based joins are written using the new syntax.\nWhen reading SQL queries, I think this would be a great improvement, since the boring normal joins on foreign keys could be given less attention, 
and focus could instead be made on making sure you understand the more complex joins.\n\n* Explicit direction of the join\nIn a traditional join on foreign key columns, it's not possible to derive if the join is a one-to-many or many-to-one join, by just looking at the SQL code itself. One must also know/inspect the data model or make assumptions based on the naming of columns and tables. This is perhaps the least interesting benefit though, since good naming makes the direction quite obvious anyway. But I think it at least reduces the total cognitive load of reading a SQL query.\n\nPOSSIBLE DRAWBACKS\n\n* Another thing users would have to learn\n* Would require changes to the SQL standard, i.e. SQL committee work\n* Introduces a hard dependency on foreign keys, they cannot be dropped\n\nSYNTAX\n\nSyntax is hard, but here is a proposal to start the discussion:\n\n from_item join_type from_item WITH [referencing_table_alias]->[foreign_key_constraint_name] = [referenced_table_alias] [ AS join_using_alias ]\n\nEXAMPLE\n\nTo experiment with the idea, I wanted to find some real-world queries written by others,\nto see how such SQL queries would look like, using traditional joins vs foreign key joins.\n\nI came up with the idea of searching Github for \"LEFT JOIN\", since just searching for \"JOIN\" would match a lot of non-SQL code as well.\nHere is one of the first examples I found, a query below from the Grafana project [1]\n[1] https://github.com/grafana/grafana/blob/main/pkg/services/accesscontrol/database/resource_permissions.go\n\nSELECT\n p.*,\n ? 
AS resource_id,\n ur.user_id AS user_id,\n u.login AS user_login,\n u.email AS user_email,\n tr.team_id AS team_id,\n t.name AS team,\n t.email AS team_email,\n r.name as role_name\nFROM permission p\n LEFT JOIN role r ON p.role_id = r.id\n LEFT JOIN team_role tr ON r.id = tr.role_id\n LEFT JOIN team t ON tr.team_id = t.id\n LEFT JOIN user_role ur ON r.id = ur.role_id\n LEFT JOIN user u ON ur.user_id = u.id\nWHERE p.id = ?\n\nHere is how the FROM clause could be rewritten:\n\nFROM permission p\n LEFT JOIN role r WITH p->permission_role_id_fkey = r\n LEFT JOIN team_role tr WITH tr->team_role_role_id_fkey = r\n LEFT JOIN team t WITH tr->team_role_team_id_fkey = t\n LEFT JOIN user_role ur WITH ur->user_role_role_id_fkey = r\n LEFT JOIN \"user\" u WITH ur->user_role_user_id_fkey = u\nWHERE p.id = 1;\n\nIn PostgreSQL, the foreign keys could also be given shorter names, since they only need to be unique per table and not per namespace. I think a nice convention is to give the foreign keys the same name as the referenced table, except if the same table is referenced multiple times or is self-referenced.\n\nRewriting our example, using such naming convention for the foreign keys:\n\nFROM permission p\n LEFT JOIN role r WITH p->role = r\n LEFT JOIN team_role tr WITH tr->role = r\n LEFT JOIN team t WITH tr->team = t\n LEFT JOIN user_role ur WITH ur->role = r\n LEFT JOIN \"user\" u WITH ur->user = u\nWHERE p.id = 1;\n\nA better example to illustrate how conciseness is improved, would be one with lots of multi-column joins.\nPlease feel free to share better query examples to evaluate.\n\nI cannot stop thinking about this idea, I really think it would greatly improve SQL as a language.\nForeign keys feels like such an underused valuable potential resource!\nIf someone can convince me this is a bad idea, that would help me forget about all of this,\nso I would greatly appreciate your thoughts, no matter how negative or positive.\n\nThank you for digesting.\n\n/Joel\n\nPS.\n\nTo 
readers who might remember the old flawed version of this new idea:\n\nIn the old proposal, the third operand (referenced_table_alias) was missing.\nThere wasn't a way of specifying what table alias the join was supposed to be made against.\nIt was assumed the referenced table was always the one being joined in,\nwhich is not always the case, since the referenced table\nmight already be in scope, and it's instead the referencing table which is being joined in.\n\nAnother problem with the old idea was you were forced to write the joins in the same order\nas the foreign keys, which often resulted in an awkward join order.\n\nThese two problems have now been solved with this new proposal.\nPerhaps new problems have been introduced though?", "msg_date": "Sat, 25 Dec 2021 21:55:51 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Foreign key joins revisited" }, { "msg_contents": "On Saturday, December 25, 2021, Joel Jacobson <joel@compiler.org> 
wrote:\n\n>\n> I've revisited the idea to somehow use foreign keys to do joins,\n>\n>\n-1\n\n\n> This is somewhat addressed by the USING join form, but USING has other\n> drawbacks, why I tend to avoid it except for one-off queries.\n>\n\nI find this sufficient.\n\n\n\n> * Would require changes to the SQL standard, i.e. SQL committee work\n>\n\nHuh?\n\nDavid J.", "msg_date": "Sat, 25 Dec 2021 14:06:44 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "On Sat, Dec 25, 2021, at 22:06, David G. Johnston wrote:\n> On Saturday, December 25, 2021, Joel Jacobson <joel@compiler.org> wrote:\n> * Would require changes to the SQL standard, i.e. SQL committee work\n>\n> Huh?\n\nI mean, one could argue this is perhaps even the wrong forum to discuss this idea,\nsince it's a proposed change to the SQL language.\nBut I think it's still meaningful to discuss the idea here,\nsince if we can reach a consensus and work on a PoC implementation,\nthat would be very valuable when presenting the idea to the SQL committee.\n\n/Joel", "msg_date": "Sat, 25 Dec 2021 22:19:13 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "On Sat, Dec 25, 2021, at 21:55, Joel Jacobson wrote:\n> FROM permission p\n>     LEFT JOIN role r WITH p->permission_role_id_fkey = r\n>     LEFT JOIN team_role tr WITH tr->team_role_role_id_fkey = r\n>     LEFT JOIN team t WITH tr->team_role_team_id_fkey = t\n>     LEFT JOIN user_role ur WITH ur->user_role_role_id_fkey = r\n>     LEFT JOIN \"user\" u WITH ur->user_role_user_id_fkey = u\n> WHERE p.id = 1;\n\nSomeone pointed out the part to the right of the last equal sign is redundant.\n\nAlso, \"KEY\" is perhaps a better keyword to use than \"WITH\",\nto indicate it's a join using a foreign KEY.\n\nWith these two changes, the query becomes:\n\n    FROM permission p\n    LEFT JOIN role r KEY p->permission_role_id_fkey\n    LEFT JOIN team_role tr KEY tr->team_role_role_id_fkey\n    LEFT JOIN team t KEY tr->team_role_team_id_fkey\n    LEFT JOIN user_role ur KEY ur->user_role_role_id_fkey\n    LEFT JOIN \"user\" u KEY ur->user_role_user_id_fkey\n    WHERE p.id = 1;\n\n/Joel", "msg_date": "Sun, 26 Dec 2021 07:46:41 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "On Sun, Dec 26, 2021, at 19:33, Sascha Kuhl wrote:\n> The Syntax is great. Which language does it come from. I consider it not german. But I understand it mathematically.\n> Great extension.\n\nIt doesn't come from any language. But I've seen similar features in ORMs, such as the jOOQ Java project. [1]\n\nActually, I think jOOQ's \"ON KEY\" terminology might be something to take inspiration from.\nIn jOOQ, it's a Java method .onKey(), but I think it would look nice in SQL too:\n\n    LEFT JOIN role r ON KEY p.permission_role_id_fkey\n\nI think it would be nice if we could simply using dot \".\" instead of \"->\" or whatever.\nI think it should be possible since \"ON KEY\" would avoid any ambiguity in how to interpret what comes after.\nWe would know \"permission_role_id_fkey\" is a foreign key name and not a column.\nOr is the grammar too sensitive for such creativity?\n\n[1] https://www.jooq.org/doc/latest/manual/sql-building/table-expressions/joined-tables/join-predicate-on-key/\n\n/Joel", "msg_date": "Sun, 26 Dec 2021 19:52:37 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "Could you make\n\nJOIN key ?\n\nJoel Jacobson <joel@compiler.org> schrieb am So., 26. Dez. 2021, 19:52:\n\n> On Sun, Dec 26, 2021, at 19:33, Sascha Kuhl wrote:\n> > The Syntax is great. Which language does it come from. I consider it not\n> german. But I understand it mathematically.\n> > Great extension.\n>\n> It doesn't come from any language. But I've seen similar features in ORMs,\n> such as the jOOQ Java project. 
[1]\n>\n> Actually, I think jOOQ's \"ON KEY\" terminology might be something to take\n> inspiration from.\n> In jOOQ, it's a Java method .onKey(), but I think it would look nice in\n> SQL too:\n>\n>     LEFT JOIN role r ON KEY p.permission_role_id_fkey\n>\n> I think it would be nice if we could simply using dot \".\" instead of \"->\"\n> or whatever.\n> I think it should be possible since \"ON KEY\" would avoid any ambiguity in\n> how to interpret what comes after.\n> We would know \"permission_role_id_fkey\" is a foreign key name and not a\n> column.\n> Or is the grammar too sensitive for such creativity?\n>\n> [1]\n> https://www.jooq.org/doc/latest/manual/sql-building/table-expressions/joined-tables/join-predicate-on-key/\n>\n> /Joel\n>", "msg_date": "Sun, 26 Dec 2021 19:54:40 +0100", "msg_from": "Sascha Kuhl <yogidabanli@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "On Sun, Dec 26, 2021, at 19:54, Sascha Kuhl wrote:\n> Could you make\n>\n> JOIN key ?\n\nNot sure what you mean.\nPerhaps you can explain by rewriting the normal query below according to your idea?\n\nSELECT *\nFROM permission p\nLEFT JOIN role r ON p.role_id = r.id\n\nGiven the foreign key on \"permission\" that references \"role\" is named \"permission_role_id_fkey\".\n\n/Joel", "msg_date": "Sun, 26 Dec 2021 20:00:09 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "On Sun, 26 Dec 2021 at 01:47, Joel Jacobson <joel@compiler.org> wrote:\n\n> On Sat, Dec 25, 2021, at 21:55, Joel Jacobson wrote:\n> > FROM permission p\n> >     LEFT JOIN role r WITH 
p->permission_role_id_fkey = r\n> >     LEFT JOIN team_role tr WITH tr->team_role_role_id_fkey = r\n> >     LEFT JOIN team t WITH tr->team_role_team_id_fkey = t\n> >     LEFT JOIN user_role ur WITH ur->user_role_role_id_fkey = r\n> >     LEFT JOIN \"user\" u WITH ur->user_role_user_id_fkey = u\n> > WHERE p.id = 1;\n>\n\nIs it going too far to omit the table name? I mean, any given foreign key\ncan only point to one other table:\n\n[....]\nLEFT JOIN FOREIGN KEY p->permission_role_id_fkey\nLEFT JOIN FOREIGN KEY tr->team_role_role_id_fkey\nLEFT JOIN FOREIGN KEY tr->team_role_team_id_fkey\nLEFT JOIN FOREIGN KEY ur->user_role_role_id_fkey\nLEFT JOIN FOREIGN KEY ur->user_role_user_id_fkey\n[....]\n\nor some such; you can determine which other table is involved from the\nforeign key.\n\nParenthetically, I'm going to mention I really wish you could use ON and\nUSING in the same join. USING (x, y, z) basically means the same as ON\n((l.x, l.y, l.z) = (r.x, r.y, r.z)); so it's clear what putting them\ntogether should mean: just take the fields listed in the USING and add them\nto the ON clause in the same way as is currently done, but allow it even if\nthere is also an explicit ON clause.", "msg_date": "Sun, 26 Dec 2021 14:02:36 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "On Sun, Dec 26, 2021, at 20:02, Isaac Morland wrote:\n> Is it going too far to omit the table name? I mean, any given foreign key can only point to one other table:\n\nThat's actually how I envisioned this feature to work way back, but it doesn't work, and I'll try to explain why.\n\nAs demonstrated, we can omit the referenced_table_alias, as it must either be the table we are currently joining, or is the table that the foreign key references.\nBut we are not always following foreign keys on tables we have already joined in.\nSometimes, we need to do the opposite, to follow a foreign key on a table we have not yet joined in, and the referenced table is instead a table we have already joined in.\n\nLet's look at each row of your example and see if we can work it out.\nI've added the \"FROM permission p\" and also \"AS [table alias]\",\notherwise the aliases you use won't exist.\n\n> FROM permission p\n\nThis row is obviously OK. 
We now have \"p\" in scope as an alias for \"permission\".\n\n> LEFT JOIN FOREIGN KEY p->permission_role_id_fkey AS r\n\nThis row would follow the FK on \"p\" and join the \"role\" table using the \"permission.role_id\" column. OK.\n\n> LEFT JOIN FOREIGN KEY tr->team_role_role_id_fkey AS tr\n\nThis is where we fail. There is no \"tr\" table alias yet! So we cannot follow the FK.\n\nThe reason why it doesn't work is because the FK is:\nFOREIGN KEY team_role (role_id) REFERENCES role\n\nThat is, the FK is on the new table we are currently joining in.\n\nOn the previous row, we followed a FK on \"p\" which was a table we had already joined in.\n\nI hope this explains the problem.\n\n/Joel", "msg_date": "Sun, 26 Dec 2021 20:37:12 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "On Sun, 26 Dec 2021 at 14:37, Joel Jacobson <joel@compiler.org> wrote:\n\n\n> Let's look at each row your example and see if we can work it out.\n> I've added the \"FROM permission p\" and also \"AS [table alias]\",\n> otherwise the aliases you use won't exist.\n>\n> > FROM permission p\n>\n> This row is obviously OK. We now have \"p\" in scope as an alias for\n> \"permission\".\n>\n> > LEFT JOIN FOREIGN KEY p->permission_role_id_fkey AS r\n>\n> This row would follow the FK on \"p\" and join the \"role\" table using the\n> \"permission.role_id\" column. OK.\n>\n> > LEFT JOIN FOREIGN KEY tr->team_role_role_id_fkey AS tr\n>\n> This is where we fail. There is no \"tr\" table alias yet! So we cannot\n> follow the FK.\n>\n> The reason why it doesn't work is because the FK is:\n> FOREIGN KEY team_role (role_id) REFERENCES role\n>\n> That is, the FK is on the new table we are currently joining in.\n>\n\nRight, sorry, that was sloppy of me. I should have noticed that I wrote\n\"tr-> ... AS tr\". But in the case where the \"source\" (referencing) table is\nalready in the join, what's wrong with allowing my suggestion? We do need\nanother way of joining to a new table using one of its foreign keys rather\nthan a foreign key on a table already in the join, but it seems the first\ncase is pretty common.", "msg_date": "Sun, 26 Dec 2021 15:49:42 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "On Sun, Dec 26, 2021, at 19:52, Joel Jacobson wrote:\n>    LEFT JOIN role r ON KEY p.permission_role_id_fkey\n\nOops! 
I see this doesn't quite work.\nWe're missing one single bit of information.\nThat is, we need to indicate if the foreign key is\na) in the table we're currently joining\nor\nb) to some existing table we've already joined in\n\nHere comes a new proposal:\n\njoin_type from_item ON KEY foreign_key_constraint_name [IN referencing_table_alias | TO referenced_table_alias]\n\nON KEY foreign_key_constraint_name IN referencing_table_alias\n- The foreign key is in a table we've already joined in, as given by referencing_table_alias.\n\nON KEY foreign_key_constraint_name TO referenced_table_alias\n- The foreign key is in the table we're currently joining, and the foreign key references the table as given by referenced_table_alias. It's necessary to specify the alias, because the table referenced by the foreign key might have been joined in multiple times as different aliases, so we need to specify which one to join against.\n\nExample:\n\nFROM permission p\n    LEFT JOIN role r ON KEY permission_role_id_fkey IN p\n    LEFT JOIN team_role tr ON KEY team_role_role_id_fkey TO r\n    LEFT JOIN team t ON KEY team_role_team_id_fkey IN tr\n    LEFT JOIN user_role ur ON KEY user_role_role_id_fkey TO r\n    LEFT JOIN \"user\" u ON KEY user_role_user_id_fkey IN ur\n\nThoughts?\n\n/Joel", "msg_date": "Sun, 26 Dec 2021 22:00:37 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "On Sun, Dec 26, 2021, at 21:49, Isaac Morland wrote:\n> Right, sorry, that was sloppy of me. I should have noticed that I wrote \"tr-> ... AS tr\". But in the case where the \"source\"\n> (referencing) table is already in the join, what's wrong with allowing my suggestion? 
We do need another way of joining to\n> a new table using one of its foreign keys rather than a foreign key on a table already in the join, but it seems the first case\n> is pretty common.\n\nI like your idea!\nIt would be nice to avoid having to explicitly specify the referenced table, when simply following a foreign key on a table already in the join.\n\nBefore I read your reply, I sent a new message in this thread, suggesting a ON KEY ... [IN | TO] ... syntax.\n\nI think if we combine the ON KEY ... TO ... part of my idea, with your idea, we have a complete neat solution.\n\nMaybe we can make them a little more similar syntax wise though.\n\nCould you accept \"ON KEY\" instead of \"FOREIGN KEY\" for your idea?\nAnd would a simple dot work instead of ->?\n\nWe would then get:\n\nFROM permission p\n    LEFT JOIN ON KEY p.permission_role_id_fkey r\n    LEFT JOIN team_role tr ON KEY team_role_role_id_fkey TO r\n    LEFT JOIN ON KEY tr.team_role_team_id_fkey t\n    LEFT JOIN user_role ur ON KEY user_role_role_id_fkey TO r\n    LEFT JOIN ON KEY ur.user_role_user_id_fkey u\n \nSimply following a foreign key on a table already in the join:\n    LEFT JOIN ON KEY p.permission_role_id_fkey r\nHere, \"p\" is already in the join, and we follow the \"permission_role_id_fkey\" foreign key to \"role\" which we don't need to specify, but we do specify what alias we want for it, that is \"r\".\n\nIf instead joining to a new table using one of its foreign keys:\n    LEFT JOIN team_role tr ON KEY team_role_role_id_fkey TO r\nHere, we follow the foreign key on team_role named \"team_role_role_id_fkey\" and indicate we want to join against the table alias \"r\", which will then be asserted to actually be an instance of the \"role\" table. We need to specify the table alias, as we might have \"role\" in the join multiple times already as different aliases.\n\nThoughts?", "msg_date": "Sun, 26 Dec 2021 22:24:24 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "On Sun, 26 Dec 2021 at 16:24, Joel Jacobson <joel@compiler.org> wrote:\n\n\n> I think if we combine the ON KEY ... TO ... part of my idea, with your\n> idea, we have a complete neat solution.\n>\n> Maybe we can make them a little more similar syntax wise though.\n>\n> Could you accept \"ON KEY\" instead of \"FOREIGN KEY\" for your idea?\n> And would a simple dot work instead of ->?\n>\n\nI’m not fixed on the details; writing FOREIGN KEY just felt natural, and I\ncopied the -> from the earlier messages, but I didn’t really mean to\npromote those specific syntax elements.\n\nOne question to consider: which columns get included in the join and under\nwhat names? When we join USING there is just one copy of each column in the\nUSING, not one from each source table. This is one of the nicest features\nof USING. With this new feature it seems like it might make sense to omit\nthe join fields from the added table; tricky bit is they don't necessarily\nhave the same name as existing fields as must be the case with USING.", "msg_date": "Sun, 26 Dec 2021 16:36:41 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "On Sun, Dec 26, 2021, at 22:24, Joel Jacobson wrote:\n> FROM permission p\n>    LEFT JOIN ON KEY p.permission_role_id_fkey r\n>    LEFT JOIN team_role tr ON KEY team_role_role_id_fkey TO r\n>    LEFT JOIN ON KEY tr.team_role_team_id_fkey t\n>    LEFT JOIN user_role ur ON KEY user_role_role_id_fkey TO r\n>    LEFT JOIN ON KEY ur.user_role_user_id_fkey u\n\nI think readability can be improved by giving the foreign keys the same names as the referenced tables:\n\nFROM permission p\n   LEFT JOIN ON KEY p.role r\n   LEFT JOIN team_role tr ON KEY role TO r\n   LEFT JOIN ON KEY tr.team t\n   LEFT JOIN user_role ur ON KEY role TO r\n   LEFT JOIN ON KEY ur.user u\n\nThoughts?\n\n/Joel", "msg_date": "Sun, 26 Dec 2021 22:38:25 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": 
"Re: Foreign key joins revisited" }, { "msg_contents": "When you join by id, the join is unique. You can have combinations of\nfields, with multiple fields. Is it a maximum fields question?\n\nIsaac Morland <isaac.morland@gmail.com> wrote on Sun., 26 Dec. 2021,\n22:37:\n\n> On Sun, 26 Dec 2021 at 16:24, Joel Jacobson <joel@compiler.org> wrote:\n>\n>\n>> I think if we combine the ON KEY ... TO ... part of my idea, with your\n>> idea, we have a complete neat solution.\n>>\n>> Maybe we can make them a little more similar syntax wise though.\n>>\n>> Could you accept \"ON KEY\" instead of \"FOREIGN KEY\" for your idea?\n>> And would a simple dot work instead of ->?\n>>\n>\n> I’m not fixed on the details; writing FOREIGN KEY just felt natural, and I\n> copied the -> from the earlier messages, but I didn’t really mean to\n> promote those specific syntax elements.\n>\n> One question to consider: which columns get included in the join and under\n> what names? When we join USING there is just one copy of each column in the\n> USING, not one from each source table. This is one of the nicest features\n> of USING. With this new feature it seems like it might make sense to omit\n> the join fields from the added table; tricky bit is they don't necessarily\n> have the same name as existing fields as must be the case with USING.\n>\n>\n", "msg_date": "Sun, 26 Dec 2021 22:39:45 +0100", "msg_from": "Sascha Kuhl <yogidabanli@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "On Sun, Dec 26, 2021, at 22:38, Joel Jacobson wrote:\n> FROM permission p\n>   LEFT JOIN ON KEY p.role r\n>   LEFT JOIN team_role tr ON KEY role TO r\n>   LEFT JOIN ON KEY tr.team t\n>   LEFT JOIN user_role ur ON KEY role TO r\n>   LEFT JOIN ON KEY ur.user u\n\nHm, might be problematic to reuse dot operator, I think it would be controversial.\n\nPerhaps this would be more SQL idiomatic:\n\nFROM permission p\n   LEFT JOIN ON KEY role IN p AS r\n   LEFT JOIN team_role AS tr ON KEY role TO r\n   LEFT JOIN ON KEY team IN tr AS t\n   LEFT JOIN user_role AS ur ON KEY role TO r\n   LEFT JOIN ON KEY user IN ur AS u\n\n/Joel", "msg_date": "Sun, 26 Dec 2021 22:51:30 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": ">\n>\n>\n> Perhaps this would be more SQL idiomatic:\n>\n> FROM permission p\n>   LEFT JOIN ON KEY role IN p AS r\n>   LEFT JOIN team_role AS tr ON KEY role TO r\n>   LEFT JOIN ON KEY team IN tr AS t\n>   LEFT JOIN user_role AS ur ON KEY role TO r\n>   LEFT JOIN ON KEY user IN ur AS u\n>\n>\nMy second guess would be:\n\nFROM permission p\nLEFT JOIN role AS r ON [FOREIGN] KEY [(p.col1 [, p.col2 ...])]\n\n\nwhere the key spec is only required when there are multiple foreign keys in\npermission pointing to role.\n\nBut my first guess would be that the standards group won't get around to it.\n", "msg_date": "Sun, 26 Dec 2021 17:25:05 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "On Sun, Dec 26, 2021, at 23:25, Corey Huinker wrote:\n> My second guess would be:\n> FROM permission p\n> LEFT JOIN role AS r ON [FOREIGN] KEY [(p.col1 [, p.col2 ...])]\n>\n> where the key spec is only required when there are multiple foreign keys in permission pointing to role.\n>\n> But my first guess would be that the standards group won't get around to it.\n\nIt's quite a nice idea. It would definitely mean an improvement, compared to today's SQL.\n\nBenefits:\n* Ability to assert the join is actually performed on foreign key columns.\n* Conciser thanks to not always having to specify all key columns on both sides.\n\nHowever, I see one problem with leaving out the key columns:\nFirst, there is only one FK in permission pointing to role, and we write a query leaving out the key columns.\nThen, another different FK in permission pointing to role is later added, and our old query is suddenly in trouble.\n\n/Joel", "msg_date": "Mon, 27 Dec 2021 09:22:22 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "On Mon, 27 Dec 2021 at 03:22, Joel Jacobson <joel@compiler.org> wrote:\n\n\n> However, I see one problem with leaving out the key columns:\n> First, there is only one FK in permission pointing to role, and we write a\n> query leaving out the key columns.\n> Then, another different FK in permission pointing to role is later added,\n> and our old query is suddenly in trouble.\n>\n\nI thought the proposal was to give the FK constraint name. However, if the\nidea now is to allow leaving that out also if there is only one FK, then\nthat's also OK as long as people understand it can break in the same way\nNATURAL JOIN can break when columns are added later. For that matter, a\njoin mentioning column names can break if the columns are changed. But\nbreakage where the query no longer compiles are better than ones where it\nsuddenly means something very different so overall I wouldn't worry about\nthis too much.\n", "msg_date": "Mon, 27 Dec 2021 09:48:41 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": ">On Mon, Dec 27, 2021, at 15:48, Isaac Morland wrote:\n>I thought the proposal was to give the FK constraint name.\n>However, if the idea now is to allow leaving that out also if there \n>is only one FK, then that's also OK as long as people understand it can break in the same way NATURAL JOIN can break \n>when columns are added later. For that matter, a join mentioning column names can break if the columns are changed. 
But \n>breakage where the query no longer compiles are better than ones where it suddenly means something very different so \n>overall I wouldn't worry about this too much.\n\nYes, my proposal was indeed to give the FK constraint name.\nI just commented on Corey's different proposal that instead specified FK columns.\nI agree with your reasoning regarding the trade-offs and problems with such a proposal.\n \nI still see more benefits in using the FK constraint name though.\n\nI have made some new progress on the idea since last proposal:\n\nSYNTAX\n\njoin_type JOIN KEY referencing_alias.fk_name [ [ AS ] alias ]\n\njoin_type table_name [ [ AS ] alias ] KEY fk_name REF referenced_alias\n\nEXAMPLE\n\nFROM permission p\nLEFT JOIN KEY p.role r\nLEFT JOIN team_role tr KEY role REF r\nLEFT JOIN KEY tr.team t\nLEFT JOIN user_role ur KEY role REF r\nLEFT JOIN KEY ur.user u\nWHERE p.id = 1;\n\nForeign key constraint names have been given the same names as the referenced tables.\n\nThoughts?\n\n/Joel", "msg_date": "Mon, 27 Dec 2021 16:20:33 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "Joel Jacobson <joel@compiler.org> wrote on Mon., 27 Dec. 2021, 16:21:\n\n> >On Mon, Dec 27, 2021, at 15:48, Isaac Morland wrote:\n> >I thought the proposal was to give the FK constraint name.\n> >However, if the idea now is to allow leaving that out also if there\n> break in the same way NATURAL JOIN can break\n> >when columns are added later. For that matter, a join mentioning column\n> names can break if the columns are changed. But\n> >breakage where the query no longer compiles are better than ones where it\n> suddenly means something very different so\n> >overall I wouldn't worry about this too much.\n>\n> Yes, my proposal was indeed to give the FK constraint name.\n> I just commented on Corey's different proposal that instead specified FK\n> columns.\n> I agree with your reasoning regarding the trade-offs and problems with\n> such a proposal.\n>\n> I still see more benefits in using the FK constraint name though.\n>\n> I have made some new progress on the idea since last proposal:\n>\n> SYNTAX\n>\n> join_type JOIN KEY referencing_alias.fk_name [ [ AS ] alias ]\n>\n> join_type table_name [ [ AS ] alias ] KEY fk_name REF referenced_alias\n>\n> EXAMPLE\n>\n> FROM permission p\n> LEFT JOIN KEY p.role r\n> LEFT JOIN team_role tr KEY role REF r\n> LEFT JOIN KEY tr.team t\n> LEFT JOIN user_role ur KEY role REF r\n> LEFT JOIN KEY ur.user u\n> WHERE p.id = 1;\n>\n\n\nRef = in and to, great\n\n>\n> Foreign key constraint names have been given the same names as the\n> referenced tables.\n>\n> Thoughts?\n>\n> /Joel\n>\n", "msg_date": "Mon, 27 Dec 2021 16:28:43 +0100", "msg_from": "Sascha Kuhl <yogidabanli@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "On Mon, 27 Dec 2021 at 10:20, Joel Jacobson <joel@compiler.org> wrote:\n\n\n> Foreign key constraint names have been given the same names as the\n> referenced tables.\n>\n\nWhile I agree this could be a simple approach in many real cases for having\neasy to understand FK constraint names, I wonder if for illustration and\nexplaining the feature if it might work better to use names that are\ncompletely unique so that it's crystal clear that the names are constraint\nnames, not table names.\n", "msg_date": "Mon, 27 Dec 2021 11:03:43 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "On Mon, Dec 27, 2021, at 17:03, Isaac Morland wrote:\n> On Mon, 27 Dec 
2021 at 10:20, Joel Jacobson <joel@compiler.org> wrote:\n> \n> Foreign key constraint names have been given the same names as the referenced tables.\n>\n> While I agree this could be a simple approach in many real cases for having easy to understand FK constraint names, I \n> wonder if for illustration and explaining the feature if it might work better to use names that are completely unique so that \n> it's crystal clear that the names are constraint names, not table names.\n\nGood point, I agree. New version below:\n\nFROM permission p\nLEFT JOIN KEY p.permission_role_id_fkey r\nLEFT JOIN team_role tr KEY team_role_role_id_fkey REF r\nLEFT JOIN KEY tr.team_role_team_id_fkey t\nLEFT JOIN user_role ur KEY user_role_role_id_fkey REF r\nLEFT JOIN KEY ur.user_role_user_id_fkey u\nWHERE p.id = 1;\n\nThoughts?\n\n/Joel", "msg_date": "Mon, 27 Dec 2021 17:39:08 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": ">\n>\n> First, there is only one FK in permission pointing to role, and we write a\n> query leaving out the key columns.\n> Then, another different FK in permission pointing to role is later added,\n> and our old query is suddenly in trouble.\n>\n>\nWe already have that problem with cases where two tables have a common x\ncolumn:\n\nSELECT x FROM a, b\n\n\nso this would be on-brand for the standards body. And worst case scenario\nyou're just back to the situation you have now.\n", "msg_date": "Mon, 27 Dec 2021 12:22:52 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n> On Mon, 27 Dec 2021 at 03:22, Joel Jacobson <joel@compiler.org> wrote:\n>> However, I see one problem with leaving out the key columns:\n>> First, there is only one FK in permission pointing to role, and we write a\n>> query leaving out the key columns.\n>> Then, another different FK in permission pointing to role is later added,\n>> and our old query is suddenly in trouble.\n\n> I thought the proposal was to give the FK constraint name. However, if the\n> idea now is to allow leaving that out also if there is only one FK, then\n> that's also OK as long as people understand it can break in the same way\n> NATURAL JOIN can break when columns are added later.\n\nNATURAL JOIN is widely regarded as a foot-gun that the SQL committee\nshould never have invented.  Why would we want to create another one?\n\n(I suspect that making the constraint name optional would be problematic\nfor reasons of syntax ambiguity, anyway.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Dec 2021 13:15:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "On Mon, Dec 27, 2021, at 19:15, Tom Lane wrote:\n> NATURAL JOIN is widely regarded as a foot-gun that the SQL committee\n> should never have invented.  Why would we want to create another one?\n>\n> (I suspect that making the constraint name optional would be problematic\n> for reasons of syntax ambiguity, anyway.)\n\nI agree. I remember this blog post from 2013 discussing the problems\nwith both NATURAL but also the problems with USING:\nhttp://www.databasesoup.com/2013/08/fancy-sql-monday-on-vs-natural-join-vs.html\n\nSince my last email in this thread, I've learned KEY is unfortunately not a reserved keyword.\nThis probably means the proposed \"JOIN KEY\" would be problematic, since a relation could be named KEY.\n\nCan we think of some other suitable reserved keyword?\n\nHow about JOIN WITH?\n\njoin_type JOIN WITH fk_table.fk_name [ [ AS ] alias ]\njoin_type JOIN fk_table [ [ AS ] alias ] WITH fk_name REF pk_table\n\nFROM permission p\nLEFT JOIN WITH p.permission_role_id_fkey r\nLEFT JOIN team_role tr WITH team_role_role_id_fkey REF r\nLEFT JOIN WITH tr.team_role_team_id_fkey t\nLEFT JOIN user_role ur WITH user_role_role_id_fkey REF r\nLEFT JOIN WITH ur.user_role_user_id_fkey u\nWHERE p.id = 1;\n\n/Joel", "msg_date": "Tue, 28 Dec 2021 20:26:36 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> Since my last email in this thread, I've learned KEY is unfortunately not a reserved keyword.\n> This probably means the proposed \"JOIN KEY\" would be problematic, since a relation could be named KEY.\n\n> Can we think of some other suitable reserved keyword?\n\nFOREIGN?  Or even spell out \"JOIN FOREIGN KEY\".\n\n> How about JOIN WITH?\n\nSeems largely unrelated.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Dec 2021 14:41:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "On 12/28/21 8:26 PM, Joel Jacobson wrote:\n> On Mon, Dec 27, 2021, at 19:15, Tom Lane wrote:\n>> NATURAL JOIN is widely regarded as a foot-gun that the SQL committee\n>> should never have invented. 
Why would we want to create another one?\n>>\n>> (I suspect that making the constraint name optional would be problematic\n>> for reasons of syntax ambiguity, anyway.)\n> \n> I agree. I remember this blog post from 2013 discussing the problems\n> with both NATURAL but also the problems with USING:\n> http://www.databasesoup.com/2013/08/fancy-sql-monday-on-vs-natural-join-vs.html\n> \n> Since my last email in this thread, I've learned KEY is unfortunately not a reserved keyword.\n> This probably means the proposed \"JOIN KEY\" would be problematic, since a relation could be named KEY.\n> \n> Can we think of some other suitable reserved keyword?\n\nI don't particularly like this whole idea anyway, but if we're going to\nhave it, I would suggest\n\n JOIN ... USING KEY ...\n\nsince USING currently requires a parenthesized list, that shouldn't\ncreate any ambiguity.\n\n> How about JOIN WITH?\n\nWITH is severely overloaded already.\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 28 Dec 2021 20:45:12 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "> How about JOIN WITH?\nI'm -1 on this, reusing WITH is just likely to cause confusion because WITH\ncan appear other places in a query having an entirely different meaning.\nI'd just avoid that from the start.\n\n>> Can we think of some other suitable reserved keyword?\n>FOREIGN?  Or even spell out \"JOIN FOREIGN KEY\".\nI like the conciseness of just FOREIGN.\n\n ", "msg_date": "Tue, 28 Dec 2021 14:47:19 -0500", "msg_from": "Adam Brusselback <adambrusselback@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "On 2021-Dec-27, Joel Jacobson wrote:\n\n> >On Mon, Dec 27, 2021, at 15:48, Isaac Morland wrote:\n> >I thought the proposal was to give the FK constraint name.\n> >However, if the idea now is to allow leaving that out also if there \n> >is only one FK, then that's also OK as long as people understand it can break in the same way NATURAL JOIN can break \n> >when columns are added later. For that matter, a join mentioning column names can break if the columns are changed. But \n> >breakage where the query no longer compiles are better than ones where it suddenly means something very different so \n> >overall I wouldn't worry about this too much.\n> \n> Yes, my proposal was indeed to give the FK constraint name.\n> I just commented on Corey's different proposal that instead specified FK columns.\n\nBy way of precedent we have the ON CONFLICT clause, for which you can\nspecify a constraint name or a list of columns.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Los trabajadores menos efectivos son sistematicamente llevados al lugar\ndonde pueden hacer el menor daño posible: gerencia.\" (El principio Dilbert)\n\n\n", "msg_date": "Tue, 28 Dec 2021 16:56:55 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> On 12/28/21 8:26 PM, Joel Jacobson wrote:\n>> Can we think of some other suitable reserved keyword?\n\n> I don't particularly like this whole idea anyway, but if we're going to\n> have it, I would suggest\n\n> JOIN ... 
USING KEY ...\n\nThat would read well, which is nice, but I wonder if it wouldn't induce\nconfusion. You'd have to explain that it didn't work like standard\nUSING in the sense of merging the join-key columns.\n\n... unless, of course, we wanted to make it do so. Would that\nbe sane? Which name (referenced or referencing column) would\nthe merged column have?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Dec 2021 15:10:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "On 28.12.21 20:45, Vik Fearing wrote:\n> I don't particularly like this whole idea anyway, but if we're going to\n> have it, I would suggest\n> \n> JOIN ... USING KEY ...\n> \n> since USING currently requires a parenthesized list, that shouldn't\n> create any ambiguity.\n\nIn the 1990s, there were some SQL drafts that included syntax like\n\nJOIN ... USING PRIMARY KEY | USING FOREIGN KEY | USING CONSTRAINT ...\n\nAFAICT, these ideas just faded away because of other priorities, so if \nsomeone wants to revive it, some work already exists.\n\n\n\n", "msg_date": "Wed, 29 Dec 2021 10:46:52 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "\nOn 12/28/21 15:10, Tom Lane wrote:\n> Vik Fearing <vik@postgresfriends.org> writes:\n>> On 12/28/21 8:26 PM, Joel Jacobson wrote:\n>>> Can we think of some other suitable reserved keyword?\n>> I don't particularly like this whole idea anyway, but if we're going to\n>> have it, I would suggest\n>> JOIN ... USING KEY ...\n> That would read well, which is nice, but I wonder if it wouldn't induce\n> confusion. You'd have to explain that it didn't work like standard\n> USING in the sense of merging the join-key columns.\n>\n> ... unless, of course, we wanted to make it do so. Would that\n> be sane? 
Which name (referenced or referencing column) would\n> the merged column have?\n>\n> \t\t\t\n\n\n\nI agree this would cause confusion. I think your earlier suggestion of\n\n\n   JOIN ... FOREIGN KEY ...\n\n\nseems reasonable.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 29 Dec 2021 10:16:14 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> In the 1990s, there were some SQL drafts that included syntax like\n> JOIN ... USING PRIMARY KEY | USING FOREIGN KEY | USING CONSTRAINT ...\n> AFAICT, these ideas just faded away because of other priorities, so if \n> someone wants to revive it, some work already exists.\n\nInteresting!  One thing that bothered me about this whole line of\ndiscussion is that we could get blindsided in future by the SQL\ncommittee standardizing the same idea with slightly different\nsyntax/semantics.  I think borrowing this draft text would greatly\nimprove the odds of that not happening.  Do you have access to\nfull details?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 Dec 2021 10:28:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "On Wed, Dec 29, 2021, at 16:28, Tom Lane wrote:\n>Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> In the 1990s, there were some SQL drafts that included syntax like\n>> JOIN ... USING PRIMARY KEY | USING FOREIGN KEY | USING CONSTRAINT ...\n>> AFAICT, these ideas just faded away because of other priorities, so if \n>> someone wants to revive it, some work already exists.\n>\n> Interesting! 
One thing that bothered me about this whole line of\n> discussion is that we could get blindsided in future by the SQL\n> committee standardizing the same idea with slightly different\n> syntax/semantics. I think borrowing this draft text would greatly\n> improve the odds of that not happening. Do you have access to\n> full details?\n\nWisely said, I agree.\n\nI have access to the ISO online document database, but the oldest SQL documents I could find there are from 2008-10-15.\nI searched for document titles containing \"SQL\" in both ISO/IEC JTC 1/SC 32 and ISO/IEC JTC 1/SC 32/WG 3.\n\nIt would be very interesting to read these old SQL drafts from the 1990s, if they can be found.\n\n/Joel", "msg_date": "Thu, 30 Dec 2021 11:56:02 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "On Wed, Dec 29, 2021, at 16:28, Tom Lane wrote:\n>Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> In the 1990s, there were some SQL drafts that included syntax like\n>> JOIN ... USING PRIMARY KEY | USING FOREIGN KEY | USING CONSTRAINT ...\n>> AFAICT, these ideas just faded away because of other priorities, so if \n>> someone wants to revive it, some work already exists.\n>\n> Interesting! One thing that bothered me about this whole line of\n> discussion is that we could get blindsided in future by the SQL\n> committee standardizing the same idea with slightly different\n> syntax/semantics. I think borrowing this draft text would greatly\n> improve the odds of that not happening. Do you have access to\n> full details?\n\nI read an interesting comment where someone claimed the SQL standard would never\nallow using constraint names in DQL statements. [1]\n\nI responded there are already good examples of this in some vendors,\nsuch as PostgreSQL's INSERT INTO ... 
ON CONFLICT.\n(Thanks Alvaro, your reply previously in this thread reminded me of this case.)\n\nI later learned the DQL sublanguage apparently doesn't include INSERT,\nbut nonetheless I still think it's a good example of the potential value\nof using constraint names in queries.\n\nDoes anyone know if there is any such general clause in the SQL standard,\nthat would forbid using constraint names in SELECT queries?\n\n/Joel\n\n[1] https://news.ycombinator.com/item?id=29739147#29743102
", "msg_date": "Sat, 01 Jan 2022 10:53:44 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Foreign key joins revisited" }, { "msg_contents": "On Wed, Dec 29, 2021, at 10:46, Peter Eisentraut wrote:\n>In the 1990s, there were some SQL drafts that included syntax like\n\nDo you remember if it was in the beginning/middle/end of the 1990s?\nI will start the work of going through all drafts tomorrow.\n\n/Joel", "msg_date": "Mon, 03 Jan 2022 15:39:36 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Foreign key joins revisited" } ]
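For readers skimming the thread above, a concrete sketch may help. The snippet below illustrates the draft syntax quoted from the 1990s SQL drafts (`USING FOREIGN KEY` / `USING CONSTRAINT`); it is NOT valid PostgreSQL or standard SQL, and the table and constraint names are invented for illustration:

```sql
-- Hypothetical sketch only: "USING FOREIGN KEY" / "USING CONSTRAINT"
-- follow the draft wording quoted in the thread and are NOT implemented;
-- table and constraint names are invented.
CREATE TABLE customers (id bigint PRIMARY KEY, name text);
CREATE TABLE orders (
    id          bigint PRIMARY KEY,
    customer_id bigint,
    CONSTRAINT orders_customer_fk FOREIGN KEY (customer_id)
        REFERENCES customers (id)
);

-- Today the join columns must be spelled out explicitly:
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id;

-- Under the draft ideas, the declared constraint would drive the join:
-- SELECT o.id, c.name FROM orders o JOIN customers c USING FOREIGN KEY;
-- SELECT o.id, c.name FROM orders o JOIN customers c USING CONSTRAINT orders_customer_fk;
```

The attraction, as discussed in the thread, is that the join condition is derived from the declared foreign-key constraint instead of being repeated by hand in every query.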
[ { "msg_contents": "Hi!\n\nWorking on pluggable toaster (mostly, for JSONB improvements, see links \nbelow) I had found that STORAGE attribute on column is impossible to set \n in CREATE TABLE command but COMPRESS option is possible. It looks \nunreasonable. Suggested patch implements this possibility.\n\n[1] http://www.sai.msu.su/~megera/postgres/talks/jsonb-pgconfnyc-2021.pdf\n[2] http://www.sai.msu.su/~megera/postgres/talks/jsonb-pgvision-2021.pdf\n[3] http://www.sai.msu.su/~megera/postgres/talks/jsonb-pgconfonline-2021.pdf\n[4] http://www.sai.msu.su/~megera/postgres/talks/bytea-pgconfonline-2021.pdf\n\nPS I will propose pluggable toaster patch a bit later\n-- \nTeodor Sigaev E-mail: teodor@sigaev.ru\n WWW: http://www.sigaev.ru/", "msg_date": "Mon, 27 Dec 2021 10:51:42 +0300", "msg_from": "Teodor Sigaev <teodor@sigaev.ru>", "msg_from_op": true, "msg_subject": "CREATE TABLE ( .. STORAGE ..)" }, { "msg_contents": "HI\n\nFor patch create_table_storage-v1\n\n1 \n+ALTER opt_column ColId SET STORAGE name\n\n+opt_column_storage:\n+\t\t\tSTORAGE\tColId\t\t\t\t\t\t\t{ $$ = $2; }\n\nAre they both set to name or ColId? Although they are the same.\n\n2 For ColumnDef new member storage_name, did you miss the function _copyColumnDef() _equalColumnDef()?\n\n\nRegards\nWenjing\n\n\n> 2021年12月27日 15:51,Teodor Sigaev <teodor@sigaev.ru> 写道:\n> \n> Hi!\n> \n> Working on pluggable toaster (mostly, for JSONB improvements, see links below) I had found that STORAGE attribute on column is impossible to set in CREATE TABLE command but COMPRESS option is possible. It looks unreasonable. 
Suggested patch implements this possibility.\n> \n> [1] http://www.sai.msu.su/~megera/postgres/talks/jsonb-pgconfnyc-2021.pdf\n> [2] http://www.sai.msu.su/~megera/postgres/talks/jsonb-pgvision-2021.pdf\n> [3] http://www.sai.msu.su/~megera/postgres/talks/jsonb-pgconfonline-2021.pdf\n> [4] http://www.sai.msu.su/~megera/postgres/talks/bytea-pgconfonline-2021.pdf\n> \n> PS I will propose pluggable toaster patch a bit later\n> -- \n> Teodor Sigaev E-mail: teodor@sigaev.ru\n> WWW: http://www.sigaev.ru/<create_table_storage-v1.patch.gz>\n\n\n\n", "msg_date": "Fri, 21 Jan 2022 15:40:57 +0800", "msg_from": "wenjing zeng <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE ( .. STORAGE ..)" }, { "msg_contents": "Hi!\n\n> Are they both set to name or ColId? Although they are the same.\n> \n\nThank you, fixed, that was just an oversight.\n\n> 2 For ColumnDef new member storage_name, did you miss the function _copyColumnDef() _equalColumnDef()?\n\nThank you, fixed\n\n> \n> \n> Regards\n> Wenjing\n> \n> \n>> 2021年12月27日 15:51,Teodor Sigaev <teodor@sigaev.ru> 写道:\n>>\n>> Hi!\n>>\n>> Working on pluggable toaster (mostly, for JSONB improvements, see links below) I had found that STORAGE attribute on column is impossible to set in CREATE TABLE command but COMPRESS option is possible. It looks unreasonable. 
Suggested patch implements this possibility.\n>>\n>> [1] http://www.sai.msu.su/~megera/postgres/talks/jsonb-pgconfnyc-2021.pdf\n>> [2] http://www.sai.msu.su/~megera/postgres/talks/jsonb-pgvision-2021.pdf\n>> [3] http://www.sai.msu.su/~megera/postgres/talks/jsonb-pgconfonline-2021.pdf\n>> [4] http://www.sai.msu.su/~megera/postgres/talks/bytea-pgconfonline-2021.pdf\n>>\n>> PS I will propose pluggable toaster patch a bit later\n>> -- \n>> Teodor Sigaev E-mail: teodor@sigaev.ru\n>> WWW: http://www.sigaev.ru/<create_table_storage-v1.patch.gz>\n> \n\n-- \nTeodor Sigaev E-mail: teodor@sigaev.ru\n WWW: http://www.sigaev.ru/", "msg_date": "Wed, 2 Feb 2022 13:13:29 +0300", "msg_from": "Teodor Sigaev <teodor@sigaev.ru>", "msg_from_op": true, "msg_subject": "Re: CREATE TABLE ( .. STORAGE ..)" }, { "msg_contents": "On Wed, 2 Feb 2022 at 11:13, Teodor Sigaev <teodor@sigaev.ru> wrote:\n>\n> Hi!\n>\n> > Are they both set to name or ColId? Although they are the same.\n> >\n>\n> Thank you, fixed, that was just an oversight.\n>\n> > 2 For ColumnDef new member storage_name, did you miss the function _copyColumnDef() _equalColumnDef()?\n>\n> Thank you, fixed\n\nI noticed this and tried it out after needing it in a different\nthread, so this is quite the useful addition.\n\nI see that COMPRESSION and STORAGE now are handled slightly\ndifferently in the grammar. Maybe we could standardize that a bit\nmore; so that we have only one `STORAGE [kind]` definition in the\ngrammar?\n\nAs I'm new to the grammar files; would you know the difference between\n`name` and `ColId`, and why you would change from one to the other in\nALTER COLUMN STORAGE?\n\nThanks!\n\n-Matthias\n\n\n", "msg_date": "Tue, 29 Mar 2022 22:28:45 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE ( .. 
STORAGE ..)" }, { "msg_contents": "Hi hackers,\n\nI noticed that cfbot is not entirely happy with the patch, so I rebased it.\n\n> I see that COMPRESSION and STORAGE now are handled slightly\n> differently in the grammar. Maybe we could standardize that a bit\n> more; so that we have only one `STORAGE [kind]` definition in the\n> grammar?\n>\n> As I'm new to the grammar files; would you know the difference between\n> `name` and `ColId`, and why you would change from one to the other in\n> ALTER COLUMN STORAGE?\n\nGood point, Matthias. I addressed this in 0002. Does it look better now?\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Wed, 15 Jun 2022 17:51:06 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE ( .. STORAGE ..)" }, { "msg_contents": "On Wed, 15 Jun 2022 at 16:51, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi hackers,\n>\n> I noticed that cfbot is not entirely happy with the patch, so I rebased it.\n>\n> > I see that COMPRESSION and STORAGE now are handled slightly\n> > differently in the grammar. Maybe we could standardize that a bit\n> > more; so that we have only one `STORAGE [kind]` definition in the\n> > grammar?\n> >\n> > As I'm new to the grammar files; would you know the difference between\n> > `name` and `ColId`, and why you would change from one to the other in\n> > ALTER COLUMN STORAGE?\n>\n> Good point, Matthias. I addressed this in 0002. Does it look better now?\n\nWhen updating a patchset generally we try to keep the patches\nself-contained, and update patches as opposed to adding incremental\npatches to the set.\n\nApart from this comment on the format of the patch, the result seems solid.\n\n- Matthias\n\n\n", "msg_date": "Thu, 16 Jun 2022 11:17:01 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE ( .. 
STORAGE ..)" }, { "msg_contents": "Hi Matthias,\n\n> Apart from this comment on the format of the patch, the result seems solid.\n\nMany thanks.\n\n> When updating a patchset generally we try to keep the patches\n> self-contained, and update patches as opposed to adding incremental\n> patches to the set.\n\nMy reasoning was to separate my changes from the ones originally\nproposed by Teodor. After doing `git am` locally a reviewer can see\nthem separately, or together with `git diff origin/master`, whatever\nhe or she prefers. The committer can choose between committing two\npatches ony by one, or rebasing them to a single commit.\n\nI will avoid the \"patch for the patch\" practice from now on. Sorry for\nthe inconvenience.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 16 Jun 2022 16:40:55 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE ( .. STORAGE ..)" }, { "msg_contents": "Thanks! I have been annoyed sometimes by the lack of this feature.\n\nAt Thu, 16 Jun 2022 16:40:55 +0300, Aleksander Alekseev <aleksander@timescale.com> wrote in \n> Hi Matthias,\n> \n> > Apart from this comment on the format of the patch, the result seems solid.\n> \n> Many thanks.\n> \n> > When updating a patchset generally we try to keep the patches\n> > self-contained, and update patches as opposed to adding incremental\n> > patches to the set.\n> \n> My reasoning was to separate my changes from the ones originally\n> proposed by Teodor. After doing `git am` locally a reviewer can see\n> them separately, or together with `git diff origin/master`, whatever\n> he or she prefers. The committer can choose between committing two\n> patches ony by one, or rebasing them to a single commit.\n> \n> I will avoid the \"patch for the patch\" practice from now on. Sorry for\n> the inconvenience.\n\n0001 contains one tranling whitespace error. 
(which \"git diff --check\"\ncan detect)\n\nThe modified doc line gets too long to me. Maybe we should wrap it as\ndone in other lines of the same page.\n\nI think we should avoid descriptions dead-copied between pages. In\nthis case, I think we should remove the duplicate part of the\ndescription of ALTER TABLE then replace with something like \"See\nCREATE TABLE for details\".\n\nAs the result of copying-in the description, SET-STORAGE and\nCOMPRESSION in the page of CREATE-TABLE use different articles in the\nsame context.\n\n> SET STORAGE { PLAIN | EXTERNAL | EXTENDED | MAIN }\n> This form sets the storage mode for *a* column.\n\n> COMPRESSION compression_method\n> The COMPRESSION clause sets the compression method for *the* column.\n\nFWIW I feel \"the\" is better here, but anyway we should unify them.\n\n \n static char GetAttributeCompression(Oid atttypid, char *compression);\n+static char\tGetAttributeStorage(const char *storagemode);\n\nThe whitespace after \"char\" is TAB which differs from SPC used in\nneighbouring lines.\n\nIn the grammar, COMPRESSION uses ColId, but STORAGE uses name. It\nseems to me the STORAGE is correct here, though.. (So, do we need to\nfix COMPRESSION syntax?)\n\n\nThis adds support for \"ADD COLUMN SET STORAGE\" but it is not described\nin the doc. COMPRESSION is not described, too. Shouldn't we add the\nboth this time? Or the fix for COMPRESSION can be a different patch.\n\nNow that we have three column options COMPRESSION, COLLATE and STORAGE\nwhich has the strict order in syntax. I wonder it can be relaxed but\nit might be too much..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 17 Jun 2022 11:45:35 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE ( .. 
STORAGE ..)" }, { "msg_contents": "On 15.06.22 16:51, Aleksander Alekseev wrote:\n> I noticed that cfbot is not entirely happy with the patch, so I rebased it.\n> \n>> I see that COMPRESSION and STORAGE now are handled slightly\n>> differently in the grammar. Maybe we could standardize that a bit\n>> more; so that we have only one `STORAGE [kind]` definition in the\n>> grammar?\n>>\n>> As I'm new to the grammar files; would you know the difference between\n>> `name` and `ColId`, and why you would change from one to the other in\n>> ALTER COLUMN STORAGE?\n> \n> Good point, Matthias. I addressed this in 0002. Does it look better now?\n\nIn your patch, the documentation for CREATE TABLE says \"SET STORAGE\", \nbut the actual syntax does not contain \"SET\".\n\n\n\n", "msg_date": "Wed, 22 Jun 2022 15:25:48 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE ( .. STORAGE ..)" }, { "msg_contents": "On 29.03.22 22:28, Matthias van de Meent wrote:\n> As I'm new to the grammar files; would you know the difference between\n> `name` and `ColId`, and why you would change from one to the other in\n> ALTER COLUMN STORAGE?\n\nThe grammar says\n\nname: ColId { $$ = $1; };\n\nso it doesn't matter technically.\n\nIt seems we are using \"name\" mostly for names of objects, so I wouldn't \nuse it here for storage or compression types.\n\n\n", "msg_date": "Wed, 22 Jun 2022 15:29:24 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE ( .. STORAGE ..)" }, { "msg_contents": "Hi hackers,\n\nMany thanks for the review!\n\nHere is a patch updated according to all the recent feedback, except\nfor two suggestions:\n\n> This adds support for \"ADD COLUMN SET STORAGE\" but it is not described\n> in the doc. COMPRESSION is not described, too. Shouldn't we add the\n> both this time? 
Or the fix for COMPRESSION can be a different patch.\n\nThe documentation for ADD COLUMN simply says:\n\n```\n <para>\n This form adds a new column to the table, using the same syntax as\n <link linkend=\"sql-createtable\"><command>CREATE\nTABLE</command></link>. If <literal>IF NOT EXISTS</literal>\n is specified and a column already exists with this name,\n no error is thrown.\n </para>\n```\n\nI suggest keeping a reference to CREATE TABLE, similarly as it was\ndone for ALTER COLUMN.\n\n> Now that we have three column options COMPRESSION, COLLATE and STORGE\n> which has the strict order in syntax. I wonder it can be relaxed but\n> it might be too much..\n\nAgree, this could be a bit too much for this particular discussion.\nAlthough this shouldn't be a difficult change, and I agree that this\nshould be useful, personally I don't feel enthusiastic enough to\ndeliver it right now. I suggest we address this later.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Fri, 24 Jun 2022 14:44:07 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE ( .. STORAGE ..)" }, { "msg_contents": "Hi hackers,\n\n> Here is a patch updated according to all the recent feedback, except\n> for two suggestions:\n\nIn v4 I forgot to list possible arguments for STORAGE in\nalter_table.sgml, similarly as it is done for other subcommands. Here\nis a corrected patch.\n\n- <literal>SET STORAGE</literal>\n+ <literal>SET STORAGE { PLAIN | EXTERNAL | EXTENDED | MAIN }</literal>\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Fri, 24 Jun 2022 15:30:31 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE ( .. 
STORAGE ..)" }, { "msg_contents": "Hi hackers,\n\n> > Here is a patch updated according to all the recent feedback, except\n> > for two suggestions:\n>\n> In v4 I forgot to list possible arguments for STORAGE in\n> alter_table.sgml, similarly as it is done for other subcommands. Here\n> is a corrected patch.\n\nHere is the rebased patch.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 11 Jul 2022 12:27:50 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE ( .. STORAGE ..)" }, { "msg_contents": "On 11.07.22 11:27, Aleksander Alekseev wrote:\n>>> Here is a patch updated according to all the recent feedback, except\n>>> for two suggestions:\n>>\n>> In v4 I forgot to list possible arguments for STORAGE in\n>> alter_table.sgml, similarly as it is done for other subcommands. Here\n>> is a corrected patch.\n> \n> Here is the rebased patch.\n\nThe \"safety check: do not allow toasted storage modes unless column \ndatatype is TOAST-aware\" could be moved into GetAttributeStorage(), so \nit doesn't have to be repeated. (Note that GetAttributeCompression() \ndoes similar checking.)\n\nATExecSetStorage() currently doesn't do any such check, and your patch \nisn't adding one. Is there a reason for that?\n\n\n", "msg_date": "Mon, 11 Jul 2022 15:17:13 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE ( .. STORAGE ..)" }, { "msg_contents": "Hi Peter,\n\n> The \"safety check: do not allow toasted storage modes unless column\n> datatype is TOAST-aware\" could be moved into GetAttributeStorage(), so\n> it doesn't have to be repeated. (Note that GetAttributeCompression()\n> does similar checking.)\n\nGood point. Fixed.\n\n> ATExecSetStorage() currently doesn't do any such check, and your patch\n> isn't adding one. Is there a reason for that?\n\nATExecSetStorage() does this, but the check is a bit below [1]. 
In v7\nI moved the check to GetAttributeStorage() as well.\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/commands/tablecmds.c#l8312\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 12 Jul 2022 13:10:39 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE ( .. STORAGE ..)" }, { "msg_contents": "On 12.07.22 12:10, Aleksander Alekseev wrote:\n> Hi Peter,\n> \n>> The \"safety check: do not allow toasted storage modes unless column\n>> datatype is TOAST-aware\" could be moved into GetAttributeStorage(), so\n>> it doesn't have to be repeated. (Note that GetAttributeCompression()\n>> does similar checking.)\n> \n> Good point. Fixed.\n> \n>> ATExecSetStorage() currently doesn't do any such check, and your patch\n>> isn't adding one. Is there a reason for that?\n> \n> ATExecSetStorage() does this, but the check is a bit below [1]. In v7\n> I moved the check to GetAttributeStorage() as well.\n> \n> [1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/commands/tablecmds.c#l8312\n\nCommitted.\n\nI thought the removal of the documentation details of SET COMPRESSION \nand SET STORAGE from the ALTER TABLE ref page was a bit excessive, since \nthat material actually contained useful information about what happens \nwhen you change compression or storage on a table with existing data. \nSo I left that in. Maybe there is room to deduplicate that material a \nbit, but it would need to be more fine-grained than just removing one \nside of it.\n\n\n\n", "msg_date": "Wed, 13 Jul 2022 12:28:30 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE ( .. 
STORAGE ..)" }, { "msg_contents": "Hi,\n\n> Committed.\n>\n> I thought the removal of the documentation details of SET COMPRESSION\n> and SET STORAGE from the ALTER TABLE ref page was a bit excessive, since\n> that material actually contained useful information about what happens\n> when you change compression or storage on a table with existing data.\n> So I left that in. Maybe there is room to deduplicate that material a\n> bit, but it would need to be more fine-grained than just removing one\n> side of it.\n\nThanks, Peter!\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 13 Jul 2022 13:38:05 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE ( .. STORAGE ..)" } ]
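To summarize the feature settled in the thread above, here is a hedged sketch of the syntax it enables (table and column names are invented; exact behavior depends on the PostgreSQL version that carries the commit):

```sql
-- Previously, a per-column storage mode could only be applied after creation:
CREATE TABLE blobs_old (payload text);
ALTER TABLE blobs_old ALTER COLUMN payload SET STORAGE EXTERNAL;

-- With the committed patch, the mode can be declared inline in CREATE TABLE,
-- alongside the existing COMPRESSION option (STORAGE accepts PLAIN, EXTERNAL,
-- EXTENDED or MAIN, and the toasted modes only for TOAST-aware column types):
CREATE TABLE blobs (
    payload text STORAGE EXTERNAL
);
```

Note that, per Peter Eisentraut's review comment in the thread, the inline form is spelled plain `STORAGE`, without the `SET` keyword used by `ALTER TABLE ... SET STORAGE`.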
[ { "msg_contents": "This patch adds a new node type Boolean, to go alongside the \"value\" \nnodes Integer, Float, String, etc.  This seems appropriate given that \nBoolean values are a fundamental part of the system and are used a lot.\n\nBefore, SQL-level Boolean constants were represented by a string with\na cast, and internal Boolean values in DDL commands were usually \nrepresented by Integer nodes.  This takes the place of both of these \nuses, making the intent clearer and having some amount of type safety.", "msg_date": "Mon, 27 Dec 2021 10:02:14 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Add Boolean node" }, { "msg_contents": "po 27. 12. 2021 v 10:02 odesílatel Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> napsal:\n\n>\n> This patch adds a new node type Boolean, to go alongside the \"value\"\n> nodes Integer, Float, String, etc. This seems appropriate given that\n> Boolean values are a fundamental part of the system and are used a lot.\n>\n> Before, SQL-level Boolean constants were represented by a string with\n> a cast, and internal Boolean values in DDL commands were usually\n> represented by Integer nodes. This takes the place of both of these\n> uses, making the intent clearer and having some amount of type safety.\n\n\n+1\n\nRegards\n\nPavel
", "msg_date": "Mon, 27 Dec 2021 10:08:19 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "Can that boolean node be cultural dependent validation for the value? By\nthe developer? By all?\n\nPavel Stehule <pavel.stehule@gmail.com> schrieb am Mo., 27. Dez. 2021,\n10:09:\n\n>\n>\n> po 27. 12. 2021 v 10:02 odesílatel Peter Eisentraut <\n> peter.eisentraut@enterprisedb.com> napsal:\n>\n>>\n>> This patch adds a new node type Boolean, to go alongside the \"value\"\n>> nodes Integer, Float, String, etc. This seems appropriate given that\n>> Boolean values are a fundamental part of the system and are used a lot.\n>>\n>> Before, SQL-level Boolean constants were represented by a string with\n>> a cast, and internal Boolean values in DDL commands were usually\n>> represented by Integer nodes. This takes the place of both of these\n>> uses, making the intent clearer and having some amount of type safety.\n>\n>\n> +1\n>\n> Regards\n>\n> Pavel\n>\n>
", "msg_date": "Mon, 27 Dec 2021 11:08:42 +0100", "msg_from": "Sascha Kuhl <yogidabanli@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "po 27. 12. 2021 v 11:08 odesílatel Sascha Kuhl <yogidabanli@gmail.com>\nnapsal:\n\n> Can that boolean node be cultural dependent validation for the value? By\n> the developer? By all?\n>\n\nwhy?\n\nThe boolean node is not a boolean type.\n\nThis is an internal feature. There should not be any cultural dependency\n\nRegards\n\nPavel\n\n\n> Pavel Stehule <pavel.stehule@gmail.com> schrieb am Mo., 27. Dez. 2021,\n> 10:09:\n>\n>>\n>>\n>> po 27. 12. 2021 v 10:02 odesílatel Peter Eisentraut <\n>> peter.eisentraut@enterprisedb.com> napsal:\n>>\n>>>\n>>> This patch adds a new node type Boolean, to go alongside the \"value\"\n>>> nodes Integer, Float, String, etc. This seems appropriate given that\n>>> Boolean values are a fundamental part of the system and are used a lot.\n>>>\n>>> Before, SQL-level Boolean constants were represented by a string with\n>>> a cast, and internal Boolean values in DDL commands were usually\n>>> represented by Integer nodes. This takes the place of both of these\n>>> uses, making the intent clearer and having some amount of type safety.\n>>\n>>\n>> +1\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>
", "msg_date": "Mon, 27 Dec 2021 11:15:00 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "On Mon, Dec 27, 2021 at 5:09 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> po 27. 12. 2021 v 10:02 odesílatel Peter Eisentraut <peter.eisentraut@enterprisedb.com> napsal:\n>>\n>> This patch adds a new node type Boolean, to go alongside the \"value\"\n>> nodes Integer, Float, String, etc. This seems appropriate given that\n>> Boolean values are a fundamental part of the system and are used a lot.\n>>\n>> Before, SQL-level Boolean constants were represented by a string with\n>> a cast, and internal Boolean values in DDL commands were usually\n>> represented by Integer nodes. This takes the place of both of these\n>> uses, making the intent clearer and having some amount of type safety.\n>\n> +1\n\n+1 too, looks like a good improvement. The patch looks good to me,\nalthough it's missing comment updates for at least nodeTokenType() and\nnodeRead().\n\n\n", "msg_date": "Mon, 27 Dec 2021 18:18:45 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "You think, all values are valid. Is a higher german order valid for Turkey,\nthat only know baskets, as a Form of order. 
For me not all forms of all are\nvalid for all. You cannot Export or Import food that You dislike, because\nit would hurt you. Do you have dishes that you dislike? Is all valid for\nyou and your culture.\n\nIt is ok that this is an internal feature, that is not cultural dependent.\nIwanted to give you my Interpretation of this Feature. It is ok It doesn't\nfit 😉\n\nPavel Stehule <pavel.stehule@gmail.com> schrieb am Mo., 27. Dez. 2021,\n11:15:\n\n>\n>\n> po 27. 12. 2021 v 11:08 odesílatel Sascha Kuhl <yogidabanli@gmail.com>\n> napsal:\n>\n>> Can that boolean node be cultural dependent validation for the value? By\n>> the developer? By all?\n>>\n>\n> why?\n>\n> The boolean node is not a boolean type.\n>\n> This is an internal feature. There should not be any cultural dependency\n>\n> Regards\n>\n> Pavel\n>\n>\n>> Pavel Stehule <pavel.stehule@gmail.com> schrieb am Mo., 27. Dez. 2021,\n>> 10:09:\n>>\n>>>\n>>>\n>>> po 27. 12. 2021 v 10:02 odesílatel Peter Eisentraut <\n>>> peter.eisentraut@enterprisedb.com> napsal:\n>>>\n>>>>\n>>>> This patch adds a new node type Boolean, to go alongside the \"value\"\n>>>> nodes Integer, Float, String, etc. This seems appropriate given that\n>>>> Boolean values are a fundamental part of the system and are used a lot.\n>>>>\n>>>> Before, SQL-level Boolean constants were represented by a string with\n>>>> a cast, and internal Boolean values in DDL commands were usually\n>>>> represented by Integer nodes. This takes the place of both of these\n>>>> uses, making the intent clearer and having some amount of type safety.\n>>>\n>>>\n>>> +1\n>>>\n>>> Regards\n>>>\n>>> Pavel\n>>>\n>>>
", "msg_date": "Mon, 27 Dec 2021 11:24:00 +0100", "msg_from": "Sascha Kuhl <yogidabanli@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "Hi\n\npo 27. 12. 2021 v 11:24 odesílatel Sascha Kuhl <yogidabanli@gmail.com>\nnapsal:\n\n> You think, all values are valid. Is a higher german order valid for\n> Turkey, that only know baskets, as a Form of order. 
Is all\n> valid for you and your culture.\n>\n> It is ok that this is an internal feature, that is not cultural dependent.\n> Iwanted to give you my Interpretation of this Feature. It is ok It doesn't\n> fit 😉\n>\n\nPlease, don't use top posting mode in this mailing list\nhttps://en.wikipedia.org/wiki/Posting_style#Top-posting\n\nThis is an internal feature - Node structures are not visible from SQL\nlevel. And internal features will be faster and less complex, if we don't\nneed to implement cultural dependency there. So False is just only false,\nand not \"false\" or \"lez\" or \"nepravda\" or \"Marchen\" any other.\n\nOn a custom level it is a different situation. Although I am not sure if it\nis a good idea to implement local dependency for boolean type. In Czech\nlanguage we have two related words for \"false\" - \"lez\" and \"nepravda\". And\nnothing is used in IT. But we use Czech (German) format date (and\neverywhere in code ISO format should be preferred), and we use czech\nsorting. In internal things less complexity is better (higher complexity\nmeans lower safety) . On a custom level, anybody can do what they like.\n\nRegards\n\nPavel\n\n\n>\n> Pavel Stehule <pavel.stehule@gmail.com> schrieb am Mo., 27. Dez. 2021,\n> 11:15:\n>\n>>\n>>\n>> po 27. 12. 2021 v 11:08 odesílatel Sascha Kuhl <yogidabanli@gmail.com>\n>> napsal:\n>>\n>>> Can that boolean node be cultural dependent validation for the value? By\n>>> the developer? By all?\n>>>\n>>\n>> why?\n>>\n>> The boolean node is not a boolean type.\n>>\n>> This is an internal feature. There should not be any cultural dependency\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>> Pavel Stehule <pavel.stehule@gmail.com> schrieb am Mo., 27. Dez. 2021,\n>>> 10:09:\n>>>\n>>>>\n>>>>\n>>>> po 27. 12. 2021 v 10:02 odesílatel Peter Eisentraut <\n>>>> peter.eisentraut@enterprisedb.com> napsal:\n>>>>\n>>>>>\n>>>>> This patch adds a new node type Boolean, to go alongside the \"value\"\n>>>>> nodes Integer, Float, String, etc. 
This seems appropriate given that\n>>>>> Boolean values are a fundamental part of the system and are used a lot.\n>>>>>\n>>>>> Before, SQL-level Boolean constants were represented by a string with\n>>>>> a cast, and internal Boolean values in DDL commands were usually\n>>>>> represented by Integer nodes. This takes the place of both of these\n>>>>> uses, making the intent clearer and having some amount of type safety.\n>>>>\n>>>>\n>>>> +1\n>>>>\n>>>> Regards\n>>>>\n>>>> Pavel\n>>>>\n>>>>\n
", "msg_date": "Mon, 27 Dec 2021 11:48:31 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> schrieb am Mo., 27. Dez. 2021,\n11:49:\n\n> Hi\n>\n> po 27. 12. 2021 v 11:24 odesílatel Sascha Kuhl <yogidabanli@gmail.com>\n> napsal:\n>\n>> You think, all values are valid. Is a higher german order valid for\n>> Turkey, that only know baskets, as a Form of order. For me not all forms of\n>> all are valid for all. You cannot Export or Import food that You dislike,\n>> because it would hurt you. Do you have dishes that you dislike? Is all\n>> valid for you and your culture.\n>>\n>> It is ok that this is an internal feature, that is not cultural\n>> dependent. Iwanted to give you my Interpretation of this Feature.
It is ok\n>> It doesn't fit 😉\n>>\n>\n> Please, don't use top posting mode in this mailing list\n> https://en.wikipedia.org/wiki/Posting_style#Top-posting\n>\n\nI will read and learn on that. Thanks for the hint.\n\n\n> This is an internal feature - Node structures are not visible from SQL\n> level. And internal features will be faster and less complex, if we don't\n> need to implement cultural dependency there. So False is just only false,\n> and not \"false\" or \"lez\" or \"nepravda\" or \"Marchen\" any other.\n>\n> On a custom level it is a different situation. Although I am not sure if\n> it is a good idea to implement local dependency for boolean type. In Czech\n> language we have two related words for \"false\" - \"lez\" and \"nepravda\". And\n> nothing is used in IT. But we use Czech (German) format date (and\n> everywhere in code ISO format should be preferred), and we use czech\n> sorting. In internal things less complexity is better (higher complexity\n> means lower safety) . On a custom level, anybody can do what they like.\n>\n\nI agree on that from a german point of view. This is great structure on a\nfirst guess.\n\n\n> Regards\n>\n> Pavel\n>\n>\n>>\n>> Pavel Stehule <pavel.stehule@gmail.com> schrieb am Mo., 27. Dez. 2021,\n>> 11:15:\n>>\n>>>\n>>>\n>>> po 27. 12. 2021 v 11:08 odesílatel Sascha Kuhl <yogidabanli@gmail.com>\n>>> napsal:\n>>>\n>>>> Can that boolean node be cultural dependent validation for the value?\n>>>> By the developer? By all?\n>>>>\n>>>\n>>> why?\n>>>\n>>> The boolean node is not a boolean type.\n>>>\n>>> This is an internal feature. There should not be any cultural dependency\n>>>\n>>> Regards\n>>>\n>>> Pavel\n>>>\n>>>\n>>>> Pavel Stehule <pavel.stehule@gmail.com> schrieb am Mo., 27. Dez. 2021,\n>>>> 10:09:\n>>>>\n>>>>>\n>>>>>\n>>>>> po 27. 12. 
2021 v 10:02 odesílatel Peter Eisentraut <\n>>>>> peter.eisentraut@enterprisedb.com> napsal:\n>>>>>\n>>>>>>\n>>>>>> This patch adds a new node type Boolean, to go alongside the \"value\"\n>>>>>> nodes Integer, Float, String, etc. This seems appropriate given that\n>>>>>> Boolean values are a fundamental part of the system and are used a lot.\n>>>>>>\n>>>>>> Before, SQL-level Boolean constants were represented by a string with\n>>>>>> a cast, and internal Boolean values in DDL commands were usually\n>>>>>> represented by Integer nodes. This takes the place of both of these\n>>>>>> uses, making the intent clearer and having some amount of type safety.\n>>>>>\n>>>>>\n>>>>> +1\n>>>>>\n>>>>> Regards\n>>>>>\n>>>>> Pavel\n>>>>>\n>>>>>\n
", "msg_date": "Mon, 27 Dec 2021 12:13:57 +0100", "msg_from": "Sascha Kuhl <yogidabanli@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "Sascha Kuhl <yogidabanli@gmail.com> schrieb am Mo., 27. Dez. 2021, 12:13:\n\n>\n>\n> Pavel Stehule <pavel.stehule@gmail.com> schrieb am Mo., 27. Dez. 2021,\n> 11:49:\n>\n>> Hi\n>>\n>> po 27. 12.
2021 v 11:24 odesílatel Sascha Kuhl <yogidabanli@gmail.com>\n>> napsal:\n>>\n>>> You think, all values are valid. Is a higher german order valid for\n>>> Turkey, that only know baskets, as a Form of order. For me not all forms of\n>>> all are valid for all. You cannot Export or Import food that You dislike,\n>>> because it would hurt you. Do you have dishes that you dislike? Is all\n>>> valid for you and your culture.\n>>>\n>>> It is ok that this is an internal feature, that is not cultural\n>>> dependent. Iwanted to give you my Interpretation of this Feature. It is ok\n>>> It doesn't fit 😉\n>>>\n>>\n>> Please, don't use top posting mode in this mailing list\n>> https://en.wikipedia.org/wiki/Posting_style#Top-posting\n>>\n>\n> I will read and learn on that. Thanks for the hint.\n>\n>\n>> This is an internal feature - Node structures are not visible from SQL\n>> level. And internal features will be faster and less complex, if we don't\n>> need to implement cultural dependency there. So False is just only false,\n>> and not \"false\" or \"lez\" or \"nepravda\" or \"Marchen\" any other.\n>>\n>> On a custom level it is a different situation. Although I am not sure if\n>> it is a good idea to implement local dependency for boolean type. In Czech\n>> language we have two related words for \"false\" - \"lez\" and \"nepravda\". And\n>> nothing is used in IT. But we use Czech (German) format date (and\n>> everywhere in code ISO format should be preferred), and we use czech\n>> sorting. In internal things less complexity is better (higher complexity\n>> means lower safety) . On a custom level, anybody can do what they like.\n>>\n>\nIf you See databases as a tree, buche like books, the stem is internal,\nless complexity, strong and safe. The custom level are the bows and leafs.\nEver leaf gets the ingredients it likes, but all are of the same type.\n\n\n>>\n>\n> I agree on that from a german point of view.
This is great structure on a\n> first guess.\n>\n>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>>\n>>> Pavel Stehule <pavel.stehule@gmail.com> schrieb am Mo., 27. Dez. 2021,\n>>> 11:15:\n>>>\n>>>>\n>>>>\n>>>> po 27. 12. 2021 v 11:08 odesílatel Sascha Kuhl <yogidabanli@gmail.com>\n>>>> napsal:\n>>>>\n>>>>> Can that boolean node be cultural dependent validation for the value?\n>>>>> By the developer? By all?\n>>>>>\n>>>>\n>>>> why?\n>>>>\n>>>> The boolean node is not a boolean type.\n>>>>\n>>>> This is an internal feature. There should not be any cultural dependency\n>>>>\n>>>> Regards\n>>>>\n>>>> Pavel\n>>>>\n>>>>\n>>>>> Pavel Stehule <pavel.stehule@gmail.com> schrieb am Mo., 27. Dez.\n>>>>> 2021, 10:09:\n>>>>>\n>>>>>>\n>>>>>>\n>>>>>> po 27. 12. 2021 v 10:02 odesílatel Peter Eisentraut <\n>>>>>> peter.eisentraut@enterprisedb.com> napsal:\n>>>>>>\n>>>>>>>\n>>>>>>> This patch adds a new node type Boolean, to go alongside the \"value\"\n>>>>>>> nodes Integer, Float, String, etc. This seems appropriate given\n>>>>>>> that\n>>>>>>> Boolean values are a fundamental part of the system and are used a\n>>>>>>> lot.\n>>>>>>>\n>>>>>>> Before, SQL-level Boolean constants were represented by a string with\n>>>>>>> a cast, and internal Boolean values in DDL commands were usually\n>>>>>>> represented by Integer nodes. This takes the place of both of these\n>>>>>>> uses, making the intent clearer and having some amount of type\n>>>>>>> safety.\n>>>>>>\n>>>>>>\n>>>>>> +1\n>>>>>>\n>>>>>> Regards\n>>>>>>\n>>>>>> Pavel\n>>>>>>\n>>>>>>\n
", "msg_date": "Mon, 27 Dec 2021 12:23:10 +0100", "msg_from": "Sascha Kuhl <yogidabanli@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "po 27. 12. 2021 v 12:23 odesílatel Sascha Kuhl <yogidabanli@gmail.com>\nnapsal:\n\n>\n>\n> Sascha Kuhl <yogidabanli@gmail.com> schrieb am Mo., 27. Dez. 2021, 12:13:\n>\n>>\n>>\n>> Pavel Stehule <pavel.stehule@gmail.com> schrieb am Mo., 27. Dez. 2021,\n>> 11:49:\n>>\n>>> Hi\n>>>\n>>> po 27. 12. 2021 v 11:24 odesílatel Sascha Kuhl <yogidabanli@gmail.com>\n>>> napsal:\n>>>\n>>>> You think, all values are valid. Is a higher german order valid for\n>>>> Turkey, that only know baskets, as a Form of order. For me not all forms of\n>>>> all are valid for all. You cannot Export or Import food that You dislike,\n>>>> because it would hurt you. Do you have dishes that you dislike? Is all\n>>>> valid for you and your culture.\n>>>>\n>>>> It is ok that this is an internal feature, that is not cultural\n>>>> dependent. Iwanted to give you my Interpretation of this Feature.
It is ok\n>>>> It doesn't fit 😉\n>>>>\n>>>\n>>> Please, don't use top posting mode in this mailing list\n>>> https://en.wikipedia.org/wiki/Posting_style#Top-posting\n>>>\n>>\n>> I will read and learn on that. Thanks for the hint.\n>>\n>>\n>>> This is an internal feature - Node structures are not visible from SQL\n>>> level. And internal features will be faster and less complex, if we don't\n>>> need to implement cultural dependency there. So False is just only false,\n>>> and not \"false\" or \"lez\" or \"nepravda\" or \"Marchen\" any other.\n>>>\n>>> On a custom level it is a different situation. Although I am not sure if\n>>> it is a good idea to implement local dependency for boolean type. In Czech\n>>> language we have two related words for \"false\" - \"lez\" and \"nepravda\". And\n>>> nothing is used in IT. But we use Czech (German) format date (and\n>>> everywhere in code ISO format shou lld be preferred), and we use czech\n>>> sorting. In internal things less complexity is better (higher complexity\n>>> means lower safety) . On a custom level, anybody can do what they like.\n>>>\n>>\n> If you See databases as a tree, buche like books, the stem is internal,\n> less complexity, strong and safe. The custom level are the bows and leafs.\n> Ever leaf gets the ingredients it likes, but all are of the same type.\n>\n\nagain - Node type is not equal to data type.\n\nRegards\n\nPavel\n\npo 27. 12. 2021 v 12:23 odesílatel Sascha Kuhl <yogidabanli@gmail.com> napsal:Sascha Kuhl <yogidabanli@gmail.com> schrieb am Mo., 27. Dez. 2021, 12:13:Pavel Stehule <pavel.stehule@gmail.com> schrieb am Mo., 27. Dez. 2021, 11:49:Hipo 27. 12. 2021 v 11:24 odesílatel Sascha Kuhl <yogidabanli@gmail.com> napsal:You think, all values are valid. Is a higher german order valid for Turkey, that only know baskets, as a Form of order. For me not all forms of all are valid for all. You cannot Export or Import food that You dislike, because it would hurt you. Do you have dishes that you dislike? 
Is all valid for you and your culture.It is ok that this is an internal feature, that is not cultural dependent. Iwanted to give you my Interpretation of this Feature. It is ok It doesn't fit 😉Please, don't use top posting mode in this mailing list https://en.wikipedia.org/wiki/Posting_style#Top-postingI will read and learn on that. Thanks for the hint.This is an internal feature - Node structures are not visible from SQL level. And internal features will be faster and less complex, if we don't need to implement cultural dependency there. So False is just only false, and not \"false\" or \"lez\" or \"nepravda\" or \"Marchen\" any other. On a custom level it is a different situation. Although I am not sure if it is a good idea to implement local dependency for boolean type. In Czech language we have two related words for \"false\" - \"lez\" and \"nepravda\". And nothing is used in IT. But we use Czech (German) format date (and everywhere in code ISO format shou lld be preferred), and we use czech sorting. In internal things less complexity is better (higher complexity means lower safety) . On a custom level, anybody can do what they like.If you See databases as a tree, buche like books, the stem is internal, less complexity, strong and safe. The custom level are the bows and leafs. Ever leaf gets the ingredients it likes, but all are of the same type.again - Node type is not equal to data type.RegardsPavel", "msg_date": "Mon, 27 Dec 2021 12:27:23 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> schrieb am Mo., 27. Dez. 2021,\n12:28:\n\n>\n>\n> po 27. 12. 2021 v 12:23 odesílatel Sascha Kuhl <yogidabanli@gmail.com>\n> napsal:\n>\n>>\n>>\n>> Sascha Kuhl <yogidabanli@gmail.com> schrieb am Mo., 27. Dez. 2021, 12:13:\n>>\n>>>\n>>>\n>>> Pavel Stehule <pavel.stehule@gmail.com> schrieb am Mo., 27. Dez. 
2021,\n>>> 11:49:\n>>>\n>>>> Hi\n>>>>\n>>>> po 27. 12. 2021 v 11:24 odesílatel Sascha Kuhl <yogidabanli@gmail.com>\n>>>> napsal:\n>>>>\n>>>>> You think, all values are valid. Is a higher german order valid for\n>>>>> Turkey, that only know baskets, as a Form of order. For me not all forms of\n>>>>> all are valid for all. You cannot Export or Import food that You dislike,\n>>>>> because it would hurt you. Do you have dishes that you dislike? Is all\n>>>>> valid for you and your culture.\n>>>>>\n>>>>> It is ok that this is an internal feature, that is not cultural\n>>>>> dependent. Iwanted to give you my Interpretation of this Feature. It is ok\n>>>>> It doesn't fit 😉\n>>>>>\n>>>>\n>>>> Please, don't use top posting mode in this mailing list\n>>>> https://en.wikipedia.org/wiki/Posting_style#Top-posting\n>>>>\n>>>\n>>> I will read and learn on that. Thanks for the hint.\n>>>\n>>>\n>>>> This is an internal feature - Node structures are not visible from SQL\n>>>> level. And internal features will be faster and less complex, if we don't\n>>>> need to implement cultural dependency there. So False is just only false,\n>>>> and not \"false\" or \"lez\" or \"nepravda\" or \"Marchen\" any other.\n>>>>\n>>>> On a custom level it is a different situation. Although I am not sure\n>>>> if it is a good idea to implement local dependency for boolean type. In\n>>>> Czech language we have two related words for \"false\" - \"lez\" and\n>>>> \"nepravda\". And nothing is used in IT. But we use Czech (German) format\n>>>> date (and everywhere in code ISO format should be preferred), and we use\n>>>> czech sorting. In internal things less complexity is better (higher\n>>>> complexity means lower safety) . On a custom level, anybody can do what\n>>>> they like.\n>>>>\n>>>\n>> If you See databases as a tree, buche like books, the stem is internal,\n>> less complexity, strong and safe.
The custom level are the bows and leafs.\n>> Ever leaf gets the ingredients it likes, but all are of the same type.\n>>\n>\n> again - Node type is not equal to data type.\n>\n\nDid you know that different culture have different trees. You read that.\nThe Chinese Bonsai reflects Chinese Société, as well as the german buche\nreflects Verwaltung\n\nThanks for the separation of node and data. If you consider keys, ie.\nIndexes trees, keys and nodes can be easily the same, in a simulation.\nThanks for your view.\n\n>\n> Regards\n>\n> Pavel\n>\n>\n
", "msg_date": "Mon, 27 Dec 2021 13:05:40 +0100", "msg_from": "Sascha Kuhl <yogidabanli@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "po 27. 12. 2021 v 13:05 odesílatel Sascha Kuhl <yogidabanli@gmail.com>\nnapsal:\n\n>\n>\n> Pavel Stehule <pavel.stehule@gmail.com> schrieb am Mo., 27. Dez. 2021,\n> 12:28:\n>\n>>\n>>\n>> po 27. 12. 2021 v 12:23 odesílatel Sascha Kuhl <yogidabanli@gmail.com>\n>> napsal:\n>>\n>>>\n>>>\n>>> Sascha Kuhl <yogidabanli@gmail.com> schrieb am Mo., 27. Dez. 2021,\n>>> 12:13:\n>>>>\n>>>>\n>>>> Pavel Stehule <pavel.stehule@gmail.com> schrieb am Mo., 27. Dez. 2021,\n>>>> 11:49:\n>>>>\n>>>>> Hi\n>>>>>\n>>>>> po 27. 12. 2021 v 11:24 odesílatel Sascha Kuhl <yogidabanli@gmail.com>\n>>>>> napsal:\n>>>>>\n>>>>>> You think, all values are valid. Is a higher german order valid for\n>>>>>> Turkey, that only know baskets, as a Form of order.
For me not all forms of\n>>>>>> all are valid for all. You cannot Export or Import food that You dislike,\n>>>>>> because it would hurt you. Do you have dishes that you dislike? Is all\n>>>>>> valid for you and your culture.\n>>>>>>\n>>>>>> It is ok that this is an internal feature, that is not cultural\n>>>>>> dependent. Iwanted to give you my Interpretation of this Feature. It is ok\n>>>>>> It doesn't fit 😉\n>>>>>>\n>>>>>\n>>>>> Please, don't use top posting mode in this mailing list\n>>>>> https://en.wikipedia.org/wiki/Posting_style#Top-posting\n>>>>>\n>>>>\n>>>> I will read and learn on that. Thanks for the hint.\n>>>>\n>>>>\n>>>>> This is an internal feature - Node structures are not visible from SQL\n>>>>> level. And internal features will be faster and less complex, if we don't\n>>>>> need to implement cultural dependency there. So False is just only false,\n>>>>> and not \"false\" or \"lez\" or \"nepravda\" or \"Marchen\" any other.\n>>>>>\n>>>>> On a custom level it is a different situation. Although I am not sure\n>>>>> if it is a good idea to implement local dependency for boolean type. In\n>>>>> Czech language we have two related words for \"false\" - \"lez\" and\n>>>>> \"nepravda\". And nothing is used in IT. But we use Czech (German) format\n>>>>> date (and everywhere in code ISO format shou lld be preferred), and we use\n>>>>> czech sorting. In internal things less complexity is better (higher\n>>>>> complexity means lower safety) . On a custom level, anybody can do what\n>>>>> they like.\n>>>>>\n>>>>\n>>> If you See databases as a tree, buche like books, the stem is internal,\n>>> less complexity, strong and safe. The custom level are the bows and leafs.\n>>> Ever leaf gets the ingredients it likes, but all are of the same type.\n>>>\n>>\n>> again - Node type is not equal to data type.\n>>\n>\n> Did you know that different culture have different trees. 
You read that.\n> The Chinese Bonsai reflects Chinese Société, as well as the german buche\n> reflects Verwaltung\n>\n> Thanks for the separation of node and data. If you consider keys, ie.\n> Indexes trees, keys and nodes can be easily the same, in a simulation.\n> Thanks for your view.\n>\n\nlook at Postgres source code , please.\nhttps://github.com/postgres/postgres/tree/master/src/backend/nodes. In this\ncase nodes have no relation to the index's tree.\n\nRegards\n\nPavel\n\n\n\n>> Regards\n>>\n>> Pavel\n>>\n>>\n
", "msg_date": "Mon, 27 Dec 2021 13:30:02 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "That looks like a good change. I wonder what motivates that now? Why\nwasn't it added when the usages grew? Are there more Boolean usages\nplanned?\n\nI ask because this code change will affect ability to automatically\ncherry-pick some of the patches.\n\ndefGetBoolean() - please update the comment in the default to case to\nmention defGetString along with opt_boolean_or_string production.\nReading the existing code in that function, one would wonder why to\nuse true and false over say on and off. But true/false seems a natural\nchoice.
So that's fine.\n\ndefGetBoolean() and nodeRead() could use a common function to parse a\nboolean string. The code in nodeRead() seems to assume that any string\nstarting with \"t\" will represent value true. Is that right?\n\nWe are using literal constants \"true\"/\"false\" at many places. This\npatch adds another one. I am wondering whether it makes sense to add\n#define TRUE_STR, FALSE_STR and use it everywhere for consistency and\ncorrectness.\n\nFor the sake of consistency (again :)), we should have a function to\nreturn string representation of a Boolean node and use it in both\ndefGetString and _outBoolean().\n\nAre the expected output changes like below necessary? Might affect\nbackward compatibility for applications.\n-bool\n-----\n-t\n+?column?\n+--------\n+t\n\nOn Mon, Dec 27, 2021 at 2:32 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n>\n> This patch adds a new node type Boolean, to go alongside the \"value\"\n> nodes Integer, Float, String, etc. This seems appropriate given that\n> Boolean values are a fundamental part of the system and are used a lot.\n>\n> Before, SQL-level Boolean constants were represented by a string with\n> a cast, and internal Boolean values in DDL commands were usually\n> represented by Integer nodes. This takes the place of both of these\n> uses, making the intent clearer and having some amount of type safety.\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 27 Dec 2021 18:45:08 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> writes:\n> That looks like a good change. I wonder what motivates that now? Why\n> wasn't it added when the usages grew?\n\nYou'd have to find some of the original Berkeley people to get an\nanswer for that. Possibly it's got something to do with the fact\nthat C didn't have a separate bool type back then ... 
or, going\neven further back, that LISP didn't either. In any case, it seems\nlike a plausible improvement now.\n\nDidn't really read the patch in any detail, but I did have one idea:\nI think that the different things-that-used-to-be-Value-nodes ought to\nuse different field names, say ival, rval, bval, sval not just \"val\".\nThat makes it more likely that you'd catch any code that is doing the\nwrong thing and not going through one of the access macros.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Dec 2021 09:53:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "On 2021-Dec-27, Peter Eisentraut wrote:\n\n> This patch adds a new node type Boolean, to go alongside the \"value\" nodes\n> Integer, Float, String, etc. This seems appropriate given that Boolean\n> values are a fundamental part of the system and are used a lot.\n\nI like the idea. I'm surprised that there is no notational savings in\nthe patch, however.\n\n> diff --git a/src/test/regress/expected/create_function_3.out b/src/test/regress/expected/create_function_3.out\n> index 3a4fd45147..e0c4bee893 100644\n> --- a/src/test/regress/expected/create_function_3.out\n> +++ b/src/test/regress/expected/create_function_3.out\n> @@ -403,7 +403,7 @@ SELECT pg_get_functiondef('functest_S_13'::regproc);\n> LANGUAGE sql +\n> BEGIN ATOMIC +\n> SELECT 1; +\n> - SELECT false AS bool; +\n> + SELECT false; +\n> END +\n\nHmm, interesting side-effect: we no longer assign a column name in this\ncase so it remains \"?column?\", just like it happens for other datatypes.\nThis seems okay to me. 
(This is also what causes the changes in the\nisolationtester expected output.)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Ni aún el genio muy grande llegaría muy lejos\nsi tuviera que sacarlo todo de su propio interior\" (Goethe)\n\n\n", "msg_date": "Mon, 27 Dec 2021 12:10:12 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "On 27.12.21 14:15, Ashutosh Bapat wrote:\n> That looks like a good change. I wonder what motivates that now? Why\n> wasn't it added when the usages grew? Are there more Boolean usages\n> planned?\n\nMainly, I was looking at Integer/makeInteger() and noticed that most \nuses of those weren't actually integers but booleans. This change makes \nit clearer which is which.\n\n\n", "msg_date": "Tue, 28 Dec 2021 08:59:19 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "po 27. 12. 2021 v 16:10 odesílatel Alvaro Herrera\n<alvherre@alvh.no-ip.org> napsal:\n>\n> On 2021-Dec-27, Peter Eisentraut wrote:\n>\n> > This patch adds a new node type Boolean, to go alongside the \"value\" nodes\n> > Integer, Float, String, etc. This seems appropriate given that Boolean\n> > values are a fundamental part of the system and are used a lot.\n>\n> I like the idea. 
I'm surprised that there is no notational savings in\n> the patch, however.\n>\n> > diff --git a/src/test/regress/expected/create_function_3.out b/src/test/regress/expected/create_function_3.out\n> > index 3a4fd45147..e0c4bee893 100644\n> > --- a/src/test/regress/expected/create_function_3.out\n> > +++ b/src/test/regress/expected/create_function_3.out\n> > @@ -403,7 +403,7 @@ SELECT pg_get_functiondef('functest_S_13'::regproc);\n> > LANGUAGE sql +\n> > BEGIN ATOMIC +\n> > SELECT 1; +\n> > - SELECT false AS bool; +\n> > + SELECT false; +\n> > END +\n>\n> Hmm, interesting side-effect: we no longer assign a column name in this\n> case so it remains \"?column?\", just like it happens for other datatypes.\n> This seems okay to me. (This is also what causes the changes in the\n> isolationtester expected output.)\n\nThis seems to be caused by a change of makeBoolAConst function. I was\nthinking for a while about the potential backward compatibility\nproblems, but I wasn't able to find any.\n\n> --\n> Álvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n> \"Ni aún el genio muy grande llegaría muy lejos\n> si tuviera que sacarlo todo de su propio interior\" (Goethe)\n>\n>\n\n\n", "msg_date": "Tue, 28 Dec 2021 09:53:06 +0100", "msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com> writes:\n> po 27. 12. 2021 v 16:10 odesílatel Alvaro Herrera\n> <alvherre@alvh.no-ip.org> napsal:\n>> Hmm, interesting side-effect: we no longer assign a column name in this\n>> case so it remains \"?column?\", just like it happens for other datatypes.\n>> This seems okay to me. (This is also what causes the changes in the\n>> isolationtester expected output.)\n\n> This seems to be caused by a change of makeBoolAConst function. 
I was\n> thinking for a while about the potential backward compatibility\n> problems, but I wasn't able to find any.\n\nIn theory this could break some application that's expecting\n\"SELECT ..., true, ...\" to return a column name of \"bool\"\nrather than \"?column?\". The risk of that being a problem in\npractice seems rather low, though. It certainly seems like a\nwart that you get a type name for that but not for other sorts\nof literals such as 1 or 2.4, so I'm okay with the change.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Dec 2021 10:51:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "On 2021-12-27 09:53:32 -0500, Tom Lane wrote:\n> Didn't really read the patch in any detail, but I did have one idea:\n> I think that the different things-that-used-to-be-Value-nodes ought to\n> use different field names, say ival, rval, bval, sval not just \"val\".\n> That makes it more likely that you'd catch any code that is doing the\n> wrong thing and not going through one of the access macros.\n\nIf we go around changing all these places, it might be worth to also change\nInteger to be a int64 instead of an int.\n\n\n", "msg_date": "Wed, 29 Dec 2021 12:32:11 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> If we go around changing all these places, it might be worth to also change\n> Integer to be a int64 instead of an int.\n\nMeh ... that would have some non-obvious consequences, I think,\nat least if you tried to make the grammar make use of the extra\nwidth (it'd change the type resolution behavior for integer-ish\nliterals). 
I think it's better to keep it as plain int.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 Dec 2021 15:40:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "Hi,\n\nOn 2021-12-27 10:02:14 +0100, Peter Eisentraut wrote:\n> This patch adds a new node type Boolean, to go alongside the \"value\" nodes\n> Integer, Float, String, etc. This seems appropriate given that Boolean\n> values are a fundamental part of the system and are used a lot.\n> \n> Before, SQL-level Boolean constants were represented by a string with\n> a cast, and internal Boolean values in DDL commands were usually represented\n> by Integer nodes. This takes the place of both of these uses, making the\n> intent clearer and having some amount of type safety.\n\nThis annoyed me plenty of times before, plus many.\n\n\n> From 4e1ef56b5443fa11d981eb6e407dfc7c244dc60e Mon Sep 17 00:00:00 2001\n> From: Peter Eisentraut <peter@eisentraut.org>\n> Date: Mon, 27 Dec 2021 09:52:05 +0100\n> Subject: [PATCH v1] Add Boolean node\n> \n> Before, SQL-level boolean constants were represented by a string with\n> a cast, and internal Boolean values in DDL commands were usually\n> represented by Integer nodes. 
This takes the place of both of these\n> uses, making the intent clearer and having some amount of type safety.\n> ---\n> ...\n> 20 files changed, 210 insertions(+), 126 deletions(-)\n\nThis might be easier to review if there were one patch adding the Boolean\ntype, and then a separate one converting users?\n\n\n> diff --git a/src/backend/commands/tsearchcmds.c b/src/backend/commands/tsearchcmds.c\n> index c47a05d10d..b7261a88d4 100644\n> --- a/src/backend/commands/tsearchcmds.c\n> +++ b/src/backend/commands/tsearchcmds.c\n> @@ -1742,6 +1742,15 @@ buildDefItem(const char *name, const char *val, bool was_quoted)\n> \t\t\treturn makeDefElem(pstrdup(name),\n> \t\t\t\t\t\t\t (Node *) makeFloat(pstrdup(val)),\n> \t\t\t\t\t\t\t -1);\n> +\n> +\t\tif (strcmp(val, \"true\") == 0)\n> +\t\t\treturn makeDefElem(pstrdup(name),\n> +\t\t\t\t\t\t\t (Node *) makeBoolean(true),\n> +\t\t\t\t\t\t\t -1);\n> +\t\tif (strcmp(val, \"false\") == 0)\n> +\t\t\treturn makeDefElem(pstrdup(name),\n> +\t\t\t\t\t\t\t (Node *) makeBoolean(false),\n> +\t\t\t\t\t\t\t -1);\n> \t}\n> \t/* Just make it a string */\n> \treturn makeDefElem(pstrdup(name),\n\nHm. defGetBoolean() interprets \"true\", \"false\", \"on\", \"off\" as booleans. ISTM\nwe shouldn't invent different behaviours for individual subsystems?\n\n\n> --- a/src/backend/nodes/outfuncs.c\n> +++ b/src/backend/nodes/outfuncs.c\n> @@ -3433,6 +3433,12 @@ _outFloat(StringInfo str, const Float *node)\n> \tappendStringInfoString(str, node->val);\n> }\n> \n> +static void\n> +_outBoolean(StringInfo str, const Boolean *node)\n> +{\n> +\tappendStringInfoString(str, node->val ? \"true\" : \"false\");\n> +}\n\nAny reason not to use 't' and 'f' instead? 
It seems unnecessary to bloat the\nnode output by the longer strings, and it makes parsing more expensive\ntoo:\n\n> --- a/src/backend/nodes/read.c\n> +++ b/src/backend/nodes/read.c\n> @@ -283,6 +283,8 @@ nodeTokenType(const char *token, int length)\n> \t\tretval = RIGHT_PAREN;\n> \telse if (*token == '{')\n> \t\tretval = LEFT_BRACE;\n> +\telse if (strcmp(token, \"true\") == 0 || strcmp(token, \"false\") == 0)\n> +\t\tretval = T_Boolean;\n> \telse if (*token == '\"' && length > 1 && token[length - 1] == '\"')\n> \t\tretval = T_String;\n> \telse if (*token == 'b')\n\nBefore this could be implemented as a jump table, but now it can't easily be\nanymore.\n\n\n> diff --git a/src/test/isolation/expected/ri-trigger.out b/src/test/isolation/expected/ri-trigger.out\n> index 842df80a90..db85618bef 100644\n> --- a/src/test/isolation/expected/ri-trigger.out\n> +++ b/src/test/isolation/expected/ri-trigger.out\n> @@ -4,9 +4,9 @@ starting permutation: wxry1 c1 r2 wyrx2 c2\n> step wxry1: INSERT INTO child (parent_id) VALUES (0);\n> step c1: COMMIT;\n> step r2: SELECT TRUE;\n> -bool\n> -----\n> -t \n> +?column?\n> +--------\n> +t \n> (1 row)\n\nThis doesn't seem great. 
It might be more consistent (\"SELECT 1\" doesn't end\nup with 'integer' as column name), but this still seems like an unnecessarily\nlarge user-visible change for an internal data-representation change?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 29 Dec 2021 12:48:30 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "On 29.12.21 21:32, Andres Freund wrote:\n> On 2021-12-27 09:53:32 -0500, Tom Lane wrote:\n>> Didn't really read the patch in any detail, but I did have one idea:\n>> I think that the different things-that-used-to-be-Value-nodes ought to\n>> use different field names, say ival, rval, bval, sval not just \"val\".\n>> That makes it more likely that you'd catch any code that is doing the\n>> wrong thing and not going through one of the access macros.\n> \n> If we go around changing all these places, it might be worth to also change\n> Integer to be a int64 instead of an int.\n\nI was actually looking into that, when I realized that most uses of \nInteger were actually Booleans. Hence the current patch to clear those \nfake Integers out of the way. I haven't gotten to analyze the int64 \nquestion any further, but it should be easier hereafter.\n\n\n", "msg_date": "Thu, 30 Dec 2021 09:58:10 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "On 27.12.21 10:02, Peter Eisentraut wrote:\n> This patch adds a new node type Boolean, to go alongside the \"value\" \n> nodes Integer, Float, String, etc.  This seems appropriate given that \n> Boolean values are a fundamental part of the system and are used a lot.\n> \n> Before, SQL-level Boolean constants were represented by a string with\n> a cast, and internal Boolean values in DDL commands were usually \n> represented by Integer nodes.  
This takes the place of both of these \n> uses, making the intent clearer and having some amount of type safety.\n\nHere is an update of this patch set based on the feedback. First, I \nadded a patch that makes some changes in AlterRole() that my original \npatch might have broken or at least made more confusing. Unlike in \nCreateRole(), we use three-valued logic here, so that a variable like \nissuper would have 0 = no, 1 = yes, -1 = not specified, keep previous \nvalue. I'm simplifying this, by instead using the dissuper etc. \nvariables to track whether a setting was specified. This makes \neverything a bit simpler and makes the subsequent patch easier.\n\nSecond, I added the suggest by Tom Lane to rename to fields in the \nused-to-be-Value nodes to be different in each node type (ival, fval, \netc.). I agree that this makes things a bit cleaner and reduces the \nchanges of mixups.\n\nAnd third, the original patch that introduces the Boolean node with some \nsmall changes based on the feedback.", "msg_date": "Mon, 3 Jan 2022 12:04:52 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "On 03.01.22 12:04, Peter Eisentraut wrote:\n> On 27.12.21 10:02, Peter Eisentraut wrote:\n>> This patch adds a new node type Boolean, to go alongside the \"value\" \n>> nodes Integer, Float, String, etc.  This seems appropriate given that \n>> Boolean values are a fundamental part of the system and are used a lot.\n>>\n>> Before, SQL-level Boolean constants were represented by a string with\n>> a cast, and internal Boolean values in DDL commands were usually \n>> represented by Integer nodes.  This takes the place of both of these \n>> uses, making the intent clearer and having some amount of type safety.\n> \n> Here is an update of this patch set based on the feedback.  
First, I \n> added a patch that makes some changes in AlterRole() that my original \n> patch might have broken or at least made more confusing.  Unlike in \n> CreateRole(), we use three-valued logic here, so that a variable like \n> issuper would have 0 = no, 1 = yes, -1 = not specified, keep previous \n> value.  I'm simplifying this, by instead using the dissuper etc. \n> variables to track whether a setting was specified.  This makes \n> everything a bit simpler and makes the subsequent patch easier.\n> \n> Second, I added the suggest by Tom Lane to rename to fields in the \n> used-to-be-Value nodes to be different in each node type (ival, fval, \n> etc.).  I agree that this makes things a bit cleaner and reduces the \n> changes of mixups.\n> \n> And third, the original patch that introduces the Boolean node with some \n> small changes based on the feedback.\n\nAnother very small update, attempting to appease the cfbot.", "msg_date": "Mon, 3 Jan 2022 14:18:35 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "Hi\n\npo 3. 1. 2022 v 14:18 odesílatel Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> napsal:\n\n>\n> On 03.01.22 12:04, Peter Eisentraut wrote:\n> > On 27.12.21 10:02, Peter Eisentraut wrote:\n> >> This patch adds a new node type Boolean, to go alongside the \"value\"\n> >> nodes Integer, Float, String, etc. This seems appropriate given that\n> >> Boolean values are a fundamental part of the system and are used a lot.\n> >>\n> >> Before, SQL-level Boolean constants were represented by a string with\n> >> a cast, and internal Boolean values in DDL commands were usually\n> >> represented by Integer nodes. This takes the place of both of these\n> >> uses, making the intent clearer and having some amount of type safety.\n> >\n> > Here is an update of this patch set based on the feedback. 
First, I\n> > added a patch that makes some changes in AlterRole() that my original\n> > patch might have broken or at least made more confusing. Unlike in\n> > CreateRole(), we use three-valued logic here, so that a variable like\n> > issuper would have 0 = no, 1 = yes, -1 = not specified, keep previous\n> > value. I'm simplifying this, by instead using the dissuper etc.\n> > variables to track whether a setting was specified. This makes\n> > everything a bit simpler and makes the subsequent patch easier.\n> >\n> > Second, I added the suggest by Tom Lane to rename to fields in the\n> > used-to-be-Value nodes to be different in each node type (ival, fval,\n> > etc.). I agree that this makes things a bit cleaner and reduces the\n> > changes of mixups.\n> >\n> > And third, the original patch that introduces the Boolean node with some\n> > small changes based on the feedback.\n>\n> Another very small update, attempting to appease the cfbot.\n\n\nThis is almost trivial patch\n\nThere are not problems with patching, compilation and tests\n\nmake check-world passed\n\nThere are not objection from me or from community\n\nI'll mark this patch as ready for committer\n\nRegards\n\nPavel", "msg_date": "Thu, 13 Jan 2022 10:48:01 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add Boolean node" }, { "msg_contents": "On 13.01.22 10:48, Pavel Stehule wrote:\n> There are not objection from me or from community\n> \n> I'll mark this patch as ready for committer\n\nThis patch set has been committed.\n\n\n", "msg_date": "Mon, 17 Jan 2022 17:16:36 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Add Boolean node" } ]
[ { "msg_contents": "Hi Tom\n\nI would like to ask you about the details of index build.\nI found that in the index_update_stats function, i.e. the CREATE INDEX/REINDEX/Truncate INDEX process,\nrelcache is invalidated whether or not the index information is updated. I want to know why you did this\nThe code is:\n\t\tif (dirty)\n\t\t {\n\t\t\theap_inplace_update(pg_class, tuple);\n\t\t\t/* the above sends a cache inval message */ } \n\t\telse \n\t\t{\n\t\t\t /* no need to change tuple, but force relcache inval anyway */ \n\t\t\t CacheInvalidateRelcacheByTuple(tuple); \n\t\t}\n\nThere's a special line of comment here, and I think you wrote that part for some reason.\n\nThe reason I ask this question is that \n1 similar places like the vac_update_relstats /vac_update_datfrozenxid function don't do this.\n2 Local Temp table with ON COMMIT DELETE ROWS builds index for each transaction commit.\nThis causes relcache of the temp table to be rebuilt over and over again.\n\nLooking forward to your reply.\n\nThanks\n\n\nWenjing\n\n\n\n\n", "msg_date": "Mon, 27 Dec 2021 17:09:31 +0800", "msg_from": "wenjing zeng <wjzeng2012@gmail.com>", "msg_from_op": true, "msg_subject": "why does reindex invalidate relcache without modifying system tables" }, { "msg_contents": "wenjing zeng <wjzeng2012@gmail.com> writes:\n> I found that in the index_update_stats function, i.e. the CREATE INDEX/REINDEX/Truncate INDEX process,\n> relcache is invalidated whether or not the index information is updated. I want to know why you did this\n\nDid you read the function's header comment? It says\n\n * NOTE: an important side-effect of this operation is that an SI invalidation\n * message is sent out to all backends --- including me --- causing relcache\n * entries to be flushed or updated with the new data. This must happen even\n * if we find that no change is needed in the pg_class row. When updating\n * a heap entry, this ensures that other backends find out about the new\n * index. 
When updating an index, it's important because some index AMs\n * expect a relcache flush to occur after REINDEX.\n\nThat is, what we need to force an update of is either the relcache's\nrd_indexlist list (for a table) or rd_amcache (for an index).\n\nIn the REINDEX case, we could conceivably skip the flush on the table,\nbut not on the index. I don't think it's worth worrying about though,\nbecause REINDEX will very probably have an update for the table's\nphysical size data (relpages and/or reltuples), so that it's unlikely\nthat the no-change path would be taken anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Dec 2021 10:54:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: why does reindex invalidate relcache without modifying system\n tables" }, { "msg_contents": "\n\n> 2021年12月27日 23:54,Tom Lane <tgl@sss.pgh.pa.us> 写道:\n> \n> wenjing zeng <wjzeng2012@gmail.com> writes:\n>> I found that in the index_update_stats function, i.e. the CREATE INDEX/REINDEX/Truncate INDEX process,\n>> relchche is invalidated whether the index information is updated. I want to know why you're did this\n> \n> Did you read the function's header comment? It says\n> \n> * NOTE: an important side-effect of this operation is that an SI invalidation\n> * message is sent out to all backends --- including me --- causing relcache\n> * entries to be flushed or updated with the new data. This must happen even\n> * if we find that no change is needed in the pg_class row. When updating\n> * a heap entry, this ensures that other backends find out about the new\n> * index. When updating an index, it's important because some index AMs\n> * expect a relcache flush to occur after REINDEX.\n> \n> That is, what we need to force an update of is either the relcache's\n> rd_indexlist list (for a table) or rd_amcache (for an index).\n> \n> In the REINDEX case, we could conceivably skip the flush on the table,\n> but not on the index. 
I don't think it's worth worrying about though,\n> because REINDEX will very probably have an update for the table's\n> physical size data (relpages and/or reltuples), so that it's unlikely\n> that the no-change path would be taken anyway.\n> \n> \t\t\tregards, tom lane\nThank you for your explanation, which clears up my doubts.\n\nWenjing\n\n", "msg_date": "Tue, 4 Jan 2022 11:46:45 +0800", "msg_from": "wenjing zeng <wjzeng2012@gmail.com>", "msg_from_op": true, "msg_subject": "Re: why does reindex invalidate relcache without modifying system\n tables" } ]
[ { "msg_contents": "Hi,\n\nCan the postgres server ever have/generate out of sequence WAL files?\nFor instance, 000000010000020C000000A2, 000000010000020C000000A3,\n000000010000020C000000A5 and so on, missing 000000010000020C000000A4.\nManual/Accidental deletion of the WAL files can happen, but are there\nany other extreme situations (like recycling, removing old WAL files\netc.) caused by the postgres server leading to missing WAL files?\n\nWhat happens when postgres server finds missing WAL file during\ncrash/standby recovery?\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 28 Dec 2021 07:45:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Can there ever be out of sequence WAL files?" }, { "msg_contents": "On Tue, Dec 28, 2021 at 7:45 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> Can the postgres server ever have/generate out of sequence WAL files?\n> For instance, 000000010000020C000000A2, 000000010000020C000000A3,\n> 000000010000020C000000A5 and so on, missing 000000010000020C000000A4.\n> Manual/Accidental deletion of the WAL files can happen, but are there\n> any other extreme situations (like recycling, removing old WAL files\n> etc.) caused by the postgres server leading to missing WAL files?\n>\n> What happens when postgres server finds missing WAL file during\n> crash/standby recovery?\n>\n> Thoughts?\n\nHi Hackers, a gentle ping for the above question. I think I sent it\nearlier during the holiday season.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 12 Jan 2022 07:19:48 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can there ever be out of sequence WAL files?" 
}, { "msg_contents": "On Wed, Jan 12, 2022 at 07:19:48AM +0530, Bharath Rupireddy wrote:\n> >\n> > Can the postgres server ever have/generate out of sequence WAL files?\n> > For instance, 000000010000020C000000A2, 000000010000020C000000A3,\n> > 000000010000020C000000A5 and so on, missing 000000010000020C000000A4.\n> > Manual/Accidental deletion of the WAL files can happen, but are there\n> > any other extreme situations (like recycling, removing old WAL files\n> > etc.) caused by the postgres server leading to missing WAL files?\n\nBy definition there shouldn't be such a situation, as it would otherwise be a\n(critical) bug.\n\n> > What happens when postgres server finds missing WAL file during\n> > crash/standby recovery?\n\nThe recovery should fail.\n\n\n", "msg_date": "Wed, 12 Jan 2022 10:18:11 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Can there ever be out of sequence WAL files?" }, { "msg_contents": "On Wed, Jan 12, 2022 at 10:18:11AM +0800, Julien Rouhaud wrote:\n> On Wed, Jan 12, 2022 at 07:19:48AM +0530, Bharath Rupireddy wrote:\n>>> Can the postgres server ever have/generate out of sequence WAL files?\n>>> For instance, 000000010000020C000000A2, 000000010000020C000000A3,\n>>> 000000010000020C000000A5 and so on, missing 000000010000020C000000A4.\n>>> Manual/Accidental deletion of the WAL files can happen, but are there\n>>> any other extreme situations (like recycling, removing old WAL files\n>>> etc.) caused by the postgres server leading to missing WAL files?\n> \n> By definition there shouldn't be such a situation, as it would otherwise be a\n> (critical) bug.\n\nI have seen that in the past, in cases where a system got harshly\nunplugged then replugged where a segment file flush got missing. But\nthat was just a flaky system, Postgres relied just on something\nwrong. So the answer is that this should not happen. 
\n\n>>> What happens when postgres server finds missing WAL file during\n>>> crash/standby recovery?\n> \n> The recovery should fail.\n\nxlog.c can be a good read to check the assumptions WAL replay relies\non, with things like CheckRecoveryConsistency() or\nreachedConsistency.\n--\nMichael", "msg_date": "Wed, 12 Jan 2022 13:10:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Can there ever be out of sequence WAL files?" }, { "msg_contents": "On Wed, Jan 12, 2022 at 01:10:25PM +0900, Michael Paquier wrote:\n> \n> xlog.c can be a good read to check the assumptions WAL replay relies\n> on, with things like CheckRecoveryConsistency() or\n> reachedConsistency.\n\nThat should only stand for a WAL expected to be missing right? For something\nunexpected it should fail in XLogReadRecord() when trying to fetch a missing\nblock?\n\n\n", "msg_date": "Wed, 12 Jan 2022 12:23:00 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Can there ever be out of sequence WAL files?" }, { "msg_contents": "On Wed, Jan 12, 2022 at 12:23:00PM +0800, Julien Rouhaud wrote:\n> On Wed, Jan 12, 2022 at 01:10:25PM +0900, Michael Paquier wrote:\n>> xlog.c can be a good read to check the assumptions WAL replay relies\n>> on, with things like CheckRecoveryConsistency() or\n>> reachedConsistency.\n> \n> That should only stand for a WAL expected to be missing right? For something\n> unexpected it should fail in XLogReadRecord() when trying to fetch a missing\n> block?\n\nSure, as well as there are sanity checks related to invalid page\nreferences when it comes to the consistency checks.\n--\nMichael", "msg_date": "Mon, 17 Jan 2022 16:15:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Can there ever be out of sequence WAL files?" } ]
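The gap in Bharath's example can be spotted mechanically from the file names alone. Below is a minimal sketch of that check — not PostgreSQL code — which assumes the default 16MB wal_segment_size (so each high-order "log" id in the 24-hex-digit name spans 256 segments) and ignores timeline switches and .history/.backup files:

```python
def missing_wal_segments(names, segs_per_logid=256):
    """Report WAL segment names missing from a contiguous sequence.

    Sketch only: assumes the default 16MB wal_segment_size, for which
    segs_per_logid is 256, and ignores timeline history files.
    """
    def parse(name):
        # A WAL file name is timeline(8 hex) + log(8 hex) + seg(8 hex).
        tli, log, seg = (int(name[i:i + 8], 16) for i in (0, 8, 16))
        return tli, log * segs_per_logid + seg

    by_tli = {}
    for name in names:
        tli, segno = parse(name)
        by_tli.setdefault(tli, set()).add(segno)

    gaps = []
    for tli, segnos in sorted(by_tli.items()):
        for n in range(min(segnos), max(segnos) + 1):
            if n not in segnos:
                gaps.append("%08X%08X%08X"
                            % (tli, n // segs_per_logid, n % segs_per_logid))
    return gaps
```

Running it over the listing in the question reports 000000010000020C000000A4 as missing; on a live server such a gap surfaces as described above, with recovery failing once the next record cannot be fetched.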
[ { "msg_contents": "Hi,\nFor buildDefItem():\n\n+ if (strcmp(val, \"true\") == 0)\n+ return makeDefElem(pstrdup(name),\n+ (Node *) makeBoolean(true),\n+ -1);\n+ if (strcmp(val, \"false\") == 0)\n\nShould 'TRUE' / 'FALSE' be considered above ?\n\n- issuper = intVal(dissuper->arg) != 0;\n+ issuper = boolVal(dissuper->arg) != 0;\n\nCan the above be written as (since issuper is a bool):\n\n+ issuper = boolVal(dissuper->arg);\n\nCheers\n\nHi,For buildDefItem():+       if (strcmp(val, \"true\") == 0)+           return makeDefElem(pstrdup(name),+                              (Node *) makeBoolean(true),+                              -1);+       if (strcmp(val, \"false\") == 0)Should 'TRUE' / 'FALSE' be considered above ?-       issuper = intVal(dissuper->arg) != 0;+       issuper = boolVal(dissuper->arg) != 0;Can the above be written as (since issuper is a bool):+       issuper = boolVal(dissuper->arg);Cheers", "msg_date": "Mon, 27 Dec 2021 19:19:31 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": true, "msg_subject": "Re: Add Boolean node" } ]
[ { "msg_contents": "Folks,\nWhile experimenting with toast tables I noticed that the toast index lands in the same tablespace as the toast table itself. Is there a way to make the toast indexes create in a different tablespace?\n\nPhil Godfrin | Database Administration\nNOV\nNOV US | Engineering Data\n9720 Beechnut St | Houston, Texas 77036\nM 281.825.2311\nE Philippe.Godfrin@nov.com<mailto:Philippe.Godfrin@nov.com>\n", "msg_date": "Tue, 28 Dec 2021 13:10:53 +0000", "msg_from": "\"Godfrin, Philippe E\" <Philippe.Godfrin@nov.com>", "msg_from_op": true, "msg_subject": "toast tables and toast indexes" }, { "msg_contents": "On Tue, Dec 28, 2021 at 01:10:53PM +0000, Godfrin, Philippe E wrote:\n> While experimenting with toast tables I noticed that the toast index\n> lands in the same tablespace as the toast table itself. Is there a\n> way to make the toast indexes create in a different tablespace?\n\nNo. See create_toast_table() where the toast table and its index use\nthe same tablespace as the relation they depend on.\n\nNow, you could use allow_system_table_mods and an ALTER INDEX .. SET\nTABLESPACE to change that for the index, but this is a developer\noption and you should *not* use that for anything serious:\nhttps://www.postgresql.org/docs/devel/runtime-config-developer.html\n\n\"Ill-advised use of this setting can cause irretrievable data loss or\nseriously corrupt the database system.\"\n--\nMichael", "msg_date": "Tue, 4 Jan 2022 21:30:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: toast tables and toast indexes" } ]
[ { "msg_contents": "PFA a patch to extend the compatibility of PostgreSQL::Test::Cluster to\nall live branches. It does this by introducing a couple of subclasses\nwhich override a few things. The required class is automatically\ndetected and used, so users don't need to specify a subclass. Although\nthis is my work it draws some inspiration from work by Jehan-Guillaume\nde Rorthais. The aim here is to provide minimal disruption to the\nmainline code, and also to have very small override subroutines.\n\nMy hope is to take this further, down to 9.2, which we recently decided\nto give limited build support to. However I think the present patch is a\ngood stake to put into the ground.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 28 Dec 2021 09:30:24 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Extend compatibility of PostgreSQL::Test::Cluster" }, { "msg_contents": "On 12/28/21 09:30, Andrew Dunstan wrote:\n> PFA a patch to extend the compatibility of PostgreSQL::Test::Cluster to\n> all live branches. It does this by introducing a couple of subclasses\n> which override a few things. The required class is automatically\n> detected and used, so users don't need to specify a subclass. Although\n> this is my work it draws some inspiration from work by Jehan-Guillaume\n> de Rorthais. The aim here is to provide minimal disruption to the\n> mainline code, and also to have very small override subroutines.\n>\n> My hope is to take this further, down to 9.2, which we recently decided\n> to give limited build support to. 
However I think the present patch is a\n> good stake to put into the ground.\n\n\n\nThis version handles older versions for which we have no subclass more\ngracefully.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 28 Dec 2021 11:46:53 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Extend compatibility of PostgreSQL::Test::Cluster" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n\n> +\t\tmy $subclass = __PACKAGE__ . \"::V_$maj\";\n> +\t\tbless $node, $subclass;\n> +\t\tunless ($node->isa(__PACKAGE__))\n> +\t\t{\n> +\t\t\t# It's not a subclass, so re-bless back into the main package\n> +\t\t\tbless($node, __PACKAGE__);\n> +\t\t\tcarp \"PostgreSQL::Test::Cluster isn't fully compatible with version $ver\";\n> +\t\t}\n\nThe ->isa() method works on package names as well as blessed objects, so\nthe back-and-forth blessing can be avoided.\n\n\tmy $subclass = __PACKAGE__ . \"::V_$maj\";\n\tif ($subclass->isa(__PACKAGE__))\n\t{\n\t\tbless($node, $subclass);\n\t}\n\telse\n\t{\n\t\tcarp \"PostgreSQL::Test::Cluster isn't fully compatible with version $ver\";\n\t}\n\n- ilmari\n\n\n", "msg_date": "Fri, 31 Dec 2021 16:20:03 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Extend compatibility of PostgreSQL::Test::Cluster" }, { "msg_contents": "\nOn 12/31/21 11:20, Dagfinn Ilmari Mannsåker wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>\n>> +\t\tmy $subclass = __PACKAGE__ . 
\"::V_$maj\";\n>> +\t\tbless $node, $subclass;\n>> +\t\tunless ($node->isa(__PACKAGE__))\n>> +\t\t{\n>> +\t\t\t# It's not a subclass, so re-bless back into the main package\n>> +\t\t\tbless($node, __PACKAGE__);\n>> +\t\t\tcarp \"PostgreSQL::Test::Cluster isn't fully compatible with version $ver\";\n>> +\t\t}\n> The ->isa() method works on package names as well as blessed objects, so\n> the back-and-forth blessing can be avoided.\n>\n> \tmy $subclass = __PACKAGE__ . \"::V_$maj\";\n> \tif ($subclass->isa(__PACKAGE__))\n> \t{\n> \t\tbless($node, $subclass);\n> \t}\n> \telse\n> \t{\n> \t\tcarp \"PostgreSQL::Test::Cluster isn't fully compatible with version $ver\";\n> \t}\n>\n\nOK, thanks, will fix in next version.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 31 Dec 2021 11:22:49 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Extend compatibility of PostgreSQL::Test::Cluster" }, { "msg_contents": "On 12/31/21 11:22, Andrew Dunstan wrote:\n> On 12/31/21 11:20, Dagfinn Ilmari Mannsåker wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>\n>>> +\t\tmy $subclass = __PACKAGE__ . \"::V_$maj\";\n>>> +\t\tbless $node, $subclass;\n>>> +\t\tunless ($node->isa(__PACKAGE__))\n>>> +\t\t{\n>>> +\t\t\t# It's not a subclass, so re-bless back into the main package\n>>> +\t\t\tbless($node, __PACKAGE__);\n>>> +\t\t\tcarp \"PostgreSQL::Test::Cluster isn't fully compatible with version $ver\";\n>>> +\t\t}\n>> The ->isa() method works on package names as well as blessed objects, so\n>> the back-and-forth blessing can be avoided.\n>>\n>> \tmy $subclass = __PACKAGE__ . 
\"::V_$maj\";\n>> \tif ($subclass->isa(__PACKAGE__))\n>> \t{\n>> \t\tbless($node, $subclass);\n>> \t}\n>> \telse\n>> \t{\n>> \t\tcarp \"PostgreSQL::Test::Cluster isn't fully compatible with version $ver\";\n>> \t}\n>>\n> OK, thanks, will fix in next version.\n>\n>\n\nHere's a version that does that and removes some recent bitrot.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 18 Jan 2022 18:35:39 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Extend compatibility of PostgreSQL::Test::Cluster" }, { "msg_contents": "On Tue, Jan 18, 2022 at 06:35:39PM -0500, Andrew Dunstan wrote:\n> Here's a version that does that and removes some recent bitrot.\n\nI have been looking at the full set of features of Cluster.pm and the\nrequirements behind v10 as minimal version supported, and nothing\nreally stands out.\n\n+ # old versions of walreceiver just set the application name to\n+ # `walreceiver'\n\nPerhaps this should mention to which older versions this sentence\napplies?\n--\nMichael", "msg_date": "Fri, 21 Jan 2022 16:47:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Extend compatibility of PostgreSQL::Test::Cluster" }, { "msg_contents": "\nOn 1/21/22 02:47, Michael Paquier wrote:\n> On Tue, Jan 18, 2022 at 06:35:39PM -0500, Andrew Dunstan wrote:\n>> Here's a version that does that and removes some recent bitrot.\n> I have been looking at the full set of features of Cluster.pm and the\n> requirements behind v10 as minimal version supported, and nothing\n> really stands out.\n>\n> + # old versions of walreceiver just set the application name to\n> + # `walreceiver'\n>\n> Perhaps this should mention to which older versions this sentence\n> applies?\n\n\n\nWill do in the next version. 
FTR it's versions older than 12.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 21 Jan 2022 09:59:04 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Extend compatibility of PostgreSQL::Test::Cluster" }, { "msg_contents": "\nOn 1/21/22 09:59, Andrew Dunstan wrote:\n> On 1/21/22 02:47, Michael Paquier wrote:\n>> On Tue, Jan 18, 2022 at 06:35:39PM -0500, Andrew Dunstan wrote:\n>>> Here's a version that does that and removes some recent bitrot.\n>> I have been looking at the full set of features of Cluster.pm and the\n>> requirements behind v10 as minimal version supported, and nothing\n>> really stands out.\n>>\n>> + # old versions of walreceiver just set the application name to\n>> + # `walreceiver'\n>>\n>> Perhaps this should mention to which older versions this sentence\n>> applies?\n>\n>\n> Will do in the next version. FTR it's versions older than 12.\n>\n>\n\nI'm not sure why this item has been moved to the next CF without any\ndiscussion I could see on the mailing list. It was always my intention\nto commit it this time, and I propose to do so tomorrow with the comment\nMichael has requested above. The cfbot is still happy with it.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 29 Mar 2022 17:56:02 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Extend compatibility of PostgreSQL::Test::Cluster" }, { "msg_contents": "On Tue, Mar 29, 2022 at 05:56:02PM -0400, Andrew Dunstan wrote:\n> I'm not sure why this item has been moved to the next CF without any\n> discussion I could see on the mailing list. It was always my intention\n> to commit it this time, and I propose to do so tomorrow with the comment\n> Michael has requested above. 
The cfbot is still happy with it.\n\nThanks for taking care of it!\n--\nMichael", "msg_date": "Wed, 30 Mar 2022 14:55:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Extend compatibility of PostgreSQL::Test::Cluster" }, { "msg_contents": "\nOn 3/30/22 01:55, Michael Paquier wrote:\n> On Tue, Mar 29, 2022 at 05:56:02PM -0400, Andrew Dunstan wrote:\n>> I'm not sure why this item has been moved to the next CF without any\n>> discussion I could see on the mailing list. It was always my intention\n>> to commit it this time, and I propose to do so tomorrow with the comment\n>> Michael has requested above. The cfbot is still happy with it.\n> Thanks for taking care of it!\n\n\n\nCommitted.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 30 Mar 2022 11:27:36 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Extend compatibility of PostgreSQL::Test::Cluster" } ]
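The isa()/bless pattern Andrew and Ilmari settled on above is a generic shape: pick a version-specific subclass when one exists, otherwise fall back to the base class with a warning. A rough Python analogue of the same dispatch idea — illustrative only, the class and method names here are made up and are not the PostgreSQL::Test::Cluster API:

```python
import warnings

class Cluster:
    """Base behaviour, written for the current server version."""
    _subclasses = {}

    @classmethod
    def register(cls, major):
        # Decorator: record a subclass as handling one major version.
        def wrap(sub):
            cls._subclasses[major] = sub
            return sub
        return wrap

    @classmethod
    def new(cls, version):
        # Pick the version-specific subclass if registered, else warn
        # and fall back to the base class -- the bless/isa logic above.
        major = int(str(version).split(".")[0])
        sub = cls._subclasses.get(major)
        if sub is None:
            warnings.warn(
                "Cluster isn't fully compatible with version %s" % version)
            sub = cls
        return sub()

    def wal_receiver_appname(self, node_name):
        return node_name  # 12 and later name the walreceiver per standby

@Cluster.register(11)
class V_11(Cluster):
    # versions older than 12 just set the application name to 'walreceiver'
    def wal_receiver_appname(self, node_name):
        return "walreceiver"
```

The point of the pattern, in either language, is that callers ask the base class for an instance and never need to know which subclass they got.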
[ { "msg_contents": "I have found some of the parse/analyze API calls confusing one too many \ntimes, so here I'm proposing some renaming and refactoring.\n\nNotionally, there are three parallel ways to call the parse/analyze \nphase: with fixed parameters (for example, used by SPI), with variable \nparameters (for example, used by PREPARE), and with a parser callback \n(for example, used to parse the body of SQL functions). Some of the \ninvolved functions were confusingly named and made this API structure \nmore confusing.\n\nFor example, at the top level there are pg_analyze_and_rewrite() and \npg_analyze_and_rewrite_params(). You'd think the first one doesn't take \nparameters and the second one takes parameters. But the truth is, the \nfirst one takes fixed parameters and the second one takes a parser \ncallback. The parser callback can be used to parse parameters, but also \nother things. There isn't any variant that takes variable parameters; \nthat code is sprinkled around other places altogether.\n\nOne level below that, there is parse_analyze() (for fixed parameters) \nand parse_analyze_varparams() (good name). But there is no analogous \nfunction for the callback variant; that code is spread out in \npg_analyze_and_rewrite_params().\n\nAnd then there are parse_fixed_parameters() and \nparse_variable_parameters(). But they don't do any parsing at all. \nThey just set up callbacks for the parsing to follow.\n\nThis doesn't need to be so confusing. 
With the attached patch set, the \ncalls end up:\n\npg_analyze_and_rewrite_fixedparams()\n -> parse_analyze_fixedparams()\n -> setup_parse_fixed_parameters()\n\npg_analyze_and_rewrite_varparams() [new]\n -> parse_analyze_varparams()\n -> setup_parse_variable_parameters()\n\npg_analyze_and_rewrite_withcb()\n -> parse_analyze_withcb() [new]\n -> (nothing needed here)\n\n(The \"withcb\" naming maybe isn't great; better ideas welcome.)\n\nNot included in this patch set, but food for further thought: The \npg_analyze_and_rewrite_*() functions aren't all that useful (anymore). \nOne might as well write\n\n pg_rewrite_query(parse_analyze_xxx(...))\n\nThe only things that pg_analyze_and_rewrite_*() do in addition to that \nis handle log_parser_stats, which could be moved into parse_analyze_*(), \nand TRACE_POSTGRESQL_QUERY_REWRITE_*(), which IMO doesn't make sense to \nbegin with and should be in pg_rewrite_query().", "msg_date": "Tue, 28 Dec 2021 17:22:04 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "parse/analyze API refactoring" }, { "msg_contents": "On 12/28/21, 8:25 AM, \"Peter Eisentraut\" <peter.eisentraut@enterprisedb.com> wrote:\r\n> (The \"withcb\" naming maybe isn't great; better ideas welcome.)\r\n\r\nFWIW I immediately understood that this meant \"with callback,\" so it\r\nmight be okay.\r\n\r\n> Not included in this patch set, but food for further thought: The\r\n> pg_analyze_and_rewrite_*() functions aren't all that useful (anymore).\r\n> One might as well write\r\n>\r\n> pg_rewrite_query(parse_analyze_xxx(...))\r\n\r\nI had a similar thought while reading through the patches. 
If further\r\ndeduplication isn't too much trouble, I'd vote for that.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 12 Jan 2022 23:49:28 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: parse/analyze API refactoring" }, { "msg_contents": "You set this commit fest entry to Waiting on Author, but there were no \nreviews posted and the patch still applies and builds AFAICT, so I don't \nknow what you meant by that.\n\n\nOn 13.01.22 00:49, Bossart, Nathan wrote:\n> On 12/28/21, 8:25 AM, \"Peter Eisentraut\" <peter.eisentraut@enterprisedb.com> wrote:\n>> (The \"withcb\" naming maybe isn't great; better ideas welcome.)\n> \n> FWIW I immediately understood that this meant \"with callback,\" so it\n> might be okay.\n> \n>> Not included in this patch set, but food for further thought: The\n>> pg_analyze_and_rewrite_*() functions aren't all that useful (anymore).\n>> One might as well write\n>>\n>> pg_rewrite_query(parse_analyze_xxx(...))\n> \n> I had a similar thought while reading through the patches. If further\n> deduplication isn't too much trouble, I'd vote for that.\n> \n> Nathan\n> \n\n\n\n", "msg_date": "Mon, 28 Feb 2022 07:46:40 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: parse/analyze API refactoring" }, { "msg_contents": "On Mon, Feb 28, 2022 at 07:46:40AM +0100, Peter Eisentraut wrote:\n> You set this commit fest entry to Waiting on Author, but there were no\n> reviews posted and the patch still applies and builds AFAICT, so I don't\n> know what you meant by that.\n\nApologies for the lack of clarity. I believe my only feedback was around\ndeduplicating the pg_analyze_and_rewrite_*() functions. 
Would you rather\nhandle that in a separate patch?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 28 Feb 2022 10:51:21 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parse/analyze API refactoring" }, { "msg_contents": "On 28.02.22 19:51, Nathan Bossart wrote:\n> On Mon, Feb 28, 2022 at 07:46:40AM +0100, Peter Eisentraut wrote:\n>> You set this commit fest entry to Waiting on Author, but there were no\n>> reviews posted and the patch still applies and builds AFAICT, so I don't\n>> know what you meant by that.\n> \n> Apologies for the lack of clarity. I believe my only feedback was around\n> deduplicating the pg_analyze_and_rewrite_*() functions. Would you rather\n> handle that in a separate patch?\n\nI have committed my original patches. I'll leave the above-mentioned \ntopic as ideas for the future.\n\n\n", "msg_date": "Wed, 9 Mar 2022 11:35:32 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: parse/analyze API refactoring" }, { "msg_contents": "On Wed, Mar 09, 2022 at 11:35:32AM +0100, Peter Eisentraut wrote:\n> I have committed my original patches. I'll leave the above-mentioned topic\n> as ideas for the future.\n\nSounds good.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 9 Mar 2022 14:16:53 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parse/analyze API refactoring" } ]
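Peter's closing observation in the thread above — that pg_analyze_and_rewrite_*() is essentially pg_rewrite_query(parse_analyze_xxx(...)) plus timing and tracing hooks — can be seen in miniature. A toy model with hypothetical names and return values, not the real C functions, showing the wrapper as pure composition:

```python
def parse_analyze_fixedparams(query_string, param_types=()):
    # Stand-in for parse analysis: returns an "analyzed query" record.
    return ("analyzed", query_string, tuple(param_types))

def pg_rewrite_query(analyzed):
    # Stand-in for the rewriter: returns a list of rewritten queries.
    return [("rewritten",) + analyzed]

def pg_analyze_and_rewrite_fixedparams(query_string, param_types=()):
    # The wrapper adds nothing beyond stats/tracing hooks, so it is
    # just the composition of the two steps above.
    return pg_rewrite_query(
        parse_analyze_fixedparams(query_string, param_types))
```

In this shape the deduplication discussed above amounts to inlining the composition at the call sites and moving the hooks into the two underlying steps.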
[ { "msg_contents": "forking <CA+TgmoawONZqEwe-GqmKERNY1ug0z1QhBzkHdA158xfToHKN9w@mail.gmail.com>\n\nOn Mon, Dec 13, 2021 at 09:01:57AM -0500, Robert Haas wrote:\n> On Thu, Dec 9, 2021 at 2:32 AM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n> > > Considering the vanishingly small number of actual complaints we've\n> > > seen about this, that sounds ridiculously over-engineered.\n> > > A documentation example should be sufficient.\n> >\n> > I don't know if this will tip the scales, but I'd like to lodge a\n> > belated complaint. I've gotten myself in this server-fails-to-start\n> > situation several times (in development, for what it's worth). The\n> > syntax (as Bharath pointed out in the original message) is pretty\n> > picky, there are no guard rails, and if you got there through ALTER\n> > SYSTEM, you can't fix it with ALTER SYSTEM (because the server isn't\n> > up). If you go to fix it manually, you get a scary \"Do not edit this\n> > file manually!\" warning that you have to know to ignore in this case\n> > (that's if you find the file after you realize what the fairly generic\n> > \"FATAL: ... No such file or directory\" error in the log is telling\n> > you). Plus you have to get the (different!) quoting syntax right or\n> > cut your losses and delete the change.\n> \n> +1. I disagree that trying to detect this kind of problem would be\n> \"ridiculously over-engineered.\" I don't know whether it can be done\n> elegantly enough that we'd be happy with it and I don't know whether\n> it would end up just garden variety over-engineered. 
But there's\n> nothing ridiculous about trying to prevent people from putting their\n> system into a state where it won't start.\n> \n> (To be clear, I also think updating the documentation is sensible,\n> without taking a view on exactly what that update should look like.)\n\nYea, I think documentation won't help to avoid this issue:\n\nIf ALTER SYSTEM gives an ERROR, someone will likely to check the docs after a\nfew minutes if they know that they didn't get the correct syntax.\nBut if it gives no error nor warning, then most likely they won't know to check\nthe docs.\n\nWe should check session_preload_libraries too, right ? It's PGC_SIGHUP, so if\nsomeone sets the variable and sends sighup, clients will be rejected, and they\nhad no good opportunity to avoid that.\n\n0001 adds WARNINGs when doing SET:\n\n\tpostgres=# SET local_preload_libraries=xyz;\n\tWARNING: could not load library: xyz: cannot open shared object file: No such file or directory\n\tSET\n\n\tpostgres=# ALTER SYSTEM SET shared_preload_libraries =asdf;\n\tWARNING: could not load library: $libdir/plugins/asdf: cannot open shared object file: No such file or directory\n\tALTER SYSTEM\n\n0002 adds context when failing to start.\n\n\t2021-12-27 17:01:12.996 CST postmaster[1403] WARNING: could not load library: $libdir/plugins/asdf: cannot open shared object file: No such file or directory\n\t2021-12-27 17:01:14.938 CST postmaster[1403] FATAL: could not access file \"asdf\": No such file or directory\n\t2021-12-27 17:01:14.938 CST postmaster[1403] CONTEXT: guc \"shared_preload_libraries\"\n\t2021-12-27 17:01:14.939 CST postmaster[1403] LOG: database system is shut down\n\nBut I wonder whether it'd be adequate context if dlopen were to fail rather\nthan stat() ?\n\nBefore 0003:\n\t2021-12-18 23:13:57.861 CST postmaster[11956] FATAL: could not access file \"asdf\": No such file or directory\n\t2021-12-18 23:13:57.862 CST postmaster[11956] LOG: database system is shut down\n\nAfter 0003:\n\t2021-12-18 
23:16:05.719 CST postmaster[13481] FATAL: could not load library: asdf: cannot open shared object file: No such file or directory\n\t2021-12-18 23:16:05.720 CST postmaster[13481] LOG: database system is shut down\n\n-- \nJustin", "msg_date": "Tue, 28 Dec 2021 11:45:57 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "warn if GUC set to an invalid shared library" }, { "msg_contents": "On Tue, Dec 28, 2021 at 11:15 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> forking <CA+TgmoawONZqEwe-GqmKERNY1ug0z1QhBzkHdA158xfToHKN9w@mail.gmail.com>\n>\n> On Mon, Dec 13, 2021 at 09:01:57AM -0500, Robert Haas wrote:\n> > On Thu, Dec 9, 2021 at 2:32 AM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n> > > > Considering the vanishingly small number of actual complaints we've\n> > > > seen about this, that sounds ridiculously over-engineered.\n> > > > A documentation example should be sufficient.\n> > >\n> > > I don't know if this will tip the scales, but I'd like to lodge a\n> > > belated complaint. I've gotten myself in this server-fails-to-start\n> > > situation several times (in development, for what it's worth). The\n> > > syntax (as Bharath pointed out in the original message) is pretty\n> > > picky, there are no guard rails, and if you got there through ALTER\n> > > SYSTEM, you can't fix it with ALTER SYSTEM (because the server isn't\n> > > up). If you go to fix it manually, you get a scary \"Do not edit this\n> > > file manually!\" warning that you have to know to ignore in this case\n> > > (that's if you find the file after you realize what the fairly generic\n> > > \"FATAL: ... No such file or directory\" error in the log is telling\n> > > you). Plus you have to get the (different!) quoting syntax right or\n> > > cut your losses and delete the change.\n> >\n> > +1. 
I disagree that trying to detect this kind of problem would be\n> > \"ridiculously over-engineered.\" I don't know whether it can be done\n> > elegantly enough that we'd be happy with it and I don't know whether\n> > it would end up just garden variety over-engineered. But there's\n> > nothing ridiculous about trying to prevent people from putting their\n> > system into a state where it won't start.\n> >\n> > (To be clear, I also think updating the documentation is sensible,\n> > without taking a view on exactly what that update should look like.)\n>\n> Yea, I think documentation won't help to avoid this issue:\n>\n> If ALTER SYSTEM gives an ERROR, someone will likely to check the docs after a\n> few minutes if they know that they didn't get the correct syntax.\n> But if it gives no error nor warning, then most likely they won't know to check\n> the docs.\n>\n> We should check session_preload_libraries too, right ? It's PGC_SIGHUP, so if\n> someone sets the variable and sends sighup, clients will be rejected, and they\n> had no good opportunity to avoid that.\n>\n> 0001 adds WARNINGs when doing SET:\n>\n> postgres=# SET local_preload_libraries=xyz;\n> WARNING: could not load library: xyz: cannot open shared object file: No such file or directory\n> SET\n>\n> postgres=# ALTER SYSTEM SET shared_preload_libraries =asdf;\n> WARNING: could not load library: $libdir/plugins/asdf: cannot open shared object file: No such file or directory\n> ALTER SYSTEM\n>\n> 0002 adds context when failing to start.\n>\n> 2021-12-27 17:01:12.996 CST postmaster[1403] WARNING: could not load library: $libdir/plugins/asdf: cannot open shared object file: No such file or directory\n> 2021-12-27 17:01:14.938 CST postmaster[1403] FATAL: could not access file \"asdf\": No such file or directory\n> 2021-12-27 17:01:14.938 CST postmaster[1403] CONTEXT: guc \"shared_preload_libraries\"\n> 2021-12-27 17:01:14.939 CST postmaster[1403] LOG: database system is shut down\n>\n> But I wonder whether it'd 
be adequate context if dlopen were to fail rather\n> than stat() ?\n>\n> Before 0003:\n> 2021-12-18 23:13:57.861 CST postmaster[11956] FATAL: could not access file \"asdf\": No such file or directory\n> 2021-12-18 23:13:57.862 CST postmaster[11956] LOG: database system is shut down\n>\n> After 0003:\n> 2021-12-18 23:16:05.719 CST postmaster[13481] FATAL: could not load library: asdf: cannot open shared object file: No such file or directory\n> 2021-12-18 23:16:05.720 CST postmaster[13481] LOG: database system is shut down\n\nOverall the idea looks good to me. A warning on ALTER SYSTEM SET seems\nreasonable than nothing. ERROR isn't the way to go as it limits the\nusers of setting the extensions in shared_preload_libraries first,\ninstalling them later. Is NOTICE here a better idea than WARNING?\n\nI haven't looked at the patches though.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 30 Dec 2021 13:50:49 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "Thanks for working on this! I tried it out and it worked for me. 
I\nreviewed the patch and didn't see any problems, but I'm not much of a\nC programmer.\n\nOn Tue, Dec 28, 2021 at 9:45 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> 0002 adds context when failing to start.\n>\n> 2021-12-27 17:01:12.996 CST postmaster[1403] WARNING: could not load library: $libdir/plugins/asdf: cannot open shared object file: No such file or directory\n> 2021-12-27 17:01:14.938 CST postmaster[1403] FATAL: could not access file \"asdf\": No such file or directory\n> 2021-12-27 17:01:14.938 CST postmaster[1403] CONTEXT: guc \"shared_preload_libraries\"\n> 2021-12-27 17:01:14.939 CST postmaster[1403] LOG: database system is shut down\n\nFor whatever reason, I get slightly different (and somewhat redundant)\noutput on failing to start:\n\n2022-01-08 12:59:36.784 PST [324482] WARNING: could not load library:\n$libdir/plugins/totally bogus: cannot open shared object file: No such\nfile or directory\n2022-01-08 12:59:36.787 PST [324482] FATAL: could not load library:\ntotally bogus: cannot open shared object file: No such file or\ndirectory\n2022-01-08 12:59:36.787 PST [324482] LOG: database system is shut down\n\nI'm on a pretty standard Ubuntu 20.04 (with PGDG packages installed\nfor 11, 12, and 13). I did configure to a --prefix and set\nLD_LIBRARY_PATH and PATH. Not sure if this is an issue in my\nenvironment or a platform difference?\n\nAlso, regarding the original warning:\n\n2022-01-08 12:57:24.953 PST [324338] WARNING: could not load library:\n$libdir/plugins/totally bogus: cannot open shared object file: No such\nfile or directory\n\nI think this is pretty clear to users familiar with\nshared_preload_libraries. However, for someone less experienced with\nPostgres and just following instructions on how to set up, e.g.,\nauto_explain (and making a typo), it's not clear from this message\nthat your server will fail to start again after this if you shut it\ndown (or it crashes!), and how to get out of this situation. 
Should we\nadd a HINT to that effect?\n\nSimilarly, for\n\n> 0001 adds WARNINGs when doing SET:\n>\n> postgres=# SET local_preload_libraries=xyz;\n> WARNING: could not load library: xyz: cannot open shared object file: No such file or directory\n> SET\n\nThis works for me, but should this explain the impact (especially if\nused with something like ALTER ROLE)? I guess that's probably because\nthe context may not be easily accessible. I think\nshared_preload_libraries is much more common, though, so I'm more\ninterested in a warning there.\n\nThanks,\nMaciek\n\n\n", "msg_date": "Sat, 8 Jan 2022 13:29:24 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Thu, Dec 30, 2021 at 12:21 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Overall the idea looks good to me. A warning on ALTER SYSTEM SET seems\n> reasonable than nothing. ERROR isn't the way to go as it limits the\n> users of setting the extensions in shared_preload_libraries first,\n> installing them later. Is NOTICE here a better idea than WARNING?\n\nI don't think so--I'm skeptical that \"updated shared_preload_libraries\nfirst, then install them\" is much more than a theoretical use case. We\nmay not want to block that off completely, but I think a warning is\nreasonable here, because you're *probably* doing something wrong if\nyou get to this message at all (and if you're not, you're probably\nfamiliar enough with Postgres to know to ignore the warning).\n\nThanks,\nMaciek\n\n\n", "msg_date": "Sat, 8 Jan 2022 13:32:53 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Sat, Jan 08, 2022 at 01:29:24PM -0800, Maciek Sakrejda wrote:\n> Thanks for working on this! I tried it out and it worked for me. 
I\n> reviewed the patch and didn't see any problems, but I'm not much of a\n> C programmer.\n\nThanks for looking at it. I was just hacking on it myself.\n\nUnfortunately, the output for dlopen() is not portable, which (I think) means\nmost of what I wrote can't be made to work.. Since it doesn't work to call\ndlopen() when using SET, I tried using just stat(). But that also fails on\nwindows, since one of the regression tests has an invalid filename involving\nunbalanced quotes, which cause it to return EINVAL rather than ENOENT. So SET\ncannot warn portably, unless it includes no details at all (or we specially\nhandle the windows case), or change the pre-existing regression test. But\nthere's a 2nd instability, too, apparently having to do with timing. So I'm\nplanning to drop the 0001 patch.\n\n> On Tue, Dec 28, 2021 at 9:45 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > 0002 adds context when failing to start.\n> >\n> > 2021-12-27 17:01:12.996 CST postmaster[1403] WARNING: could not load library: $libdir/plugins/asdf: cannot open shared object file: No such file or directory\n> > 2021-12-27 17:01:14.938 CST postmaster[1403] FATAL: could not access file \"asdf\": No such file or directory\n> > 2021-12-27 17:01:14.938 CST postmaster[1403] CONTEXT: guc \"shared_preload_libraries\"\n> > 2021-12-27 17:01:14.939 CST postmaster[1403] LOG: database system is shut down\n> \n> For whatever reason, I get slightly different (and somewhat redundant)\n> output on failing to start:\n> \n> 2022-01-08 12:59:36.784 PST [324482] WARNING: could not load library: $libdir/plugins/totally bogus: cannot open shared object file: No such file or directory\n> 2022-01-08 12:59:36.787 PST [324482] FATAL: could not load library: totally bogus: cannot open shared object file: No such file or directory\n> 2022-01-08 12:59:36.787 PST [324482] LOG: database system is shut down\n\nI think the first WARNING is from the GUC mechanism \"setting\" the library.\nAnd then the FATAL is from 
trying to apply the GUC.\nIt looks like you didn't apply the 0002 patch for that test so got no CONTEXT ?\n\n$ ./tmp_install/usr/local/pgsql/bin/postgres -D src/test/regress/tmp_check/data -c shared_preload_libraries=asdf\n2022-01-08 16:05:00.050 CST postmaster[2588] FATAL: could not access file \"asdf\": No such file or directory\n2022-01-08 16:05:00.050 CST postmaster[2588] CONTEXT: while loading shared libraries for GUC \"shared_preload_libraries\"\n2022-01-08 16:05:00.050 CST postmaster[2588] LOG: database system is shut down\n\n-- \nJustin", "msg_date": "Sat, 8 Jan 2022 16:07:02 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Sat, Jan 8, 2022 at 2:07 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Unfortunately, the output for dlopen() is not portable, which (I think) means\n> most of what I wrote can't be made to work.. Since it doesn't work to call\n> dlopen() when using SET, I tried using just stat(). But that also fails on\n> windows, since one of the regression tests has an invalid filename involving\n> unbalanced quotes, which cause it to return EINVAL rather than ENOENT. So SET\n> cannot warn portably, unless it includes no details at all (or we specially\n> handle the windows case), or change the pre-existing regression test. But\n> there's a 2nd instability, too, apparently having to do with timing. So I'm\n> planning to drop the 0001 patch.\n\nHmm. I think 001 is a big part of the usability improvement here.\nCould we not at least warn generically, without relaying the\nunderlying error? The notable thing in this situation is that the\nspecified library could not be loaded (and that it will almost\ncertainly cause problems on restart). The specific error would be nice\nto have, but it's less important. 
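To make "warn generically" concrete, here is a rough out-of-band sketch of an existence-only check over a comma-separated preload list — the library directory, the .so naming, and the message text are assumptions for illustration, not what the patch does:

```shell
# Hypothetical preflight: flag each library name in a
# shared_preload_libraries-style list that has no matching file,
# without relaying why the lookup failed (no dlopen(), no errno text).
libdir=./demo_libdir
mkdir -p "$libdir"
touch "$libdir/auto_explain.so"   # pretend this extension is installed

check_preload_list() {
    echo "$1" | tr ',' '\n' | while read -r lib; do
        if [ ! -e "$libdir/$lib.so" ]; then
            echo "WARNING: could not load library: $lib"
        fi
    done
}

check_preload_list "auto_explain,no_such_library" > preload_check.out
cat preload_check.out
# prints: WARNING: could not load library: no_such_library
```

Inside the server the equivalent is a stat() on the resolved path rather than a shell test, and the Windows EINVAL wrinkle mentioned above would still need handling; the point is only that a useful warning doesn't have to relay platform-specific error text.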
What is the timing instability?\n\n> > On Tue, Dec 28, 2021 at 9:45 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > For whatever reason, I get slightly different (and somewhat redundant)\n> > output on failing to start:\n> >\n> > 2022-01-08 12:59:36.784 PST [324482] WARNING: could not load library: $libdir/plugins/totally bogus: cannot open shared object file: No such file or directory\n> > 2022-01-08 12:59:36.787 PST [324482] FATAL: could not load library: totally bogus: cannot open shared object file: No such file or directory\n> > 2022-01-08 12:59:36.787 PST [324482] LOG: database system is shut down\n>\n> I think the first WARNING is from the GUC mechanism \"setting\" the library.\n> And then the FATAL is from trying to apply the GUC.\n> It looks like you didn't apply the 0002 patch for that test so got no CONTEXT ?\n\nI still had the terminal open where I tested this, and the scrollback\ndid show me applying the patch (and building after). I tried a make\nclean and applying the patch again, and I do see the CONTEXT line now.\nI'm not sure what the problem was but seems like PEBKAC--sorry about\nthat.\n\nThanks,\nMaciek\n\n\n", "msg_date": "Sun, 9 Jan 2022 11:58:18 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Sun, Jan 09, 2022 at 11:58:18AM -0800, Maciek Sakrejda wrote:\n> On Sat, Jan 8, 2022 at 2:07 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Unfortunately, the output for dlopen() is not portable, which (I think) means\n> > most of what I wrote can't be made to work.. Since it doesn't work to call\n> > dlopen() when using SET, I tried using just stat(). But that also fails on\n> > windows, since one of the regression tests has an invalid filename involving\n> > unbalanced quotes, which cause it to return EINVAL rather than ENOENT. 
So SET\n> > cannot warn portably, unless it includes no details at all (or we specially\n> > handle the windows case), or change the pre-existing regression test. But\n> > there's a 2nd instability, too, apparently having to do with timing. So I'm\n> > planning to drop the 0001 patch.\n> \n> Hmm. I think 001 is a big part of the usability improvement here.\n\nI agree - it helps people avoid causing a disruption, rather than just helping\nthem to fix it faster.\n\n> Could we not at least warn generically, without relaying the\n> underlying error? The notable thing in this situation is that the\n> specified library could not be loaded (and that it will almost\n> certainly cause problems on restart). The specific error would be nice\n> to have, but it's less important. What is the timing instability?\n\nI saw regression diffs like this, showing that the warning could be displayed\nbefore or after the SELECT was echoed.\n\nhttps://cirrus-ci.com/task/6301672321318912\n -SELECT * FROM schema4.counted;\n WARNING: could not load library: $libdir/plugins/worker_spi: cannot open shared object file: No such file or directory\n +SELECT * FROM schema4.counted;\n\nIt's certainly possible to show a static message without additional text from\nerrno.\n\nOn Tue, Dec 28, 2021 at 9:45 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > For whatever reason, I get slightly different (and somewhat redundant)\n> > > output on failing to start:\n> > >\n> > > 2022-01-08 12:59:36.784 PST [324482] WARNING: could not load library: $libdir/plugins/totally bogus: cannot open shared object file: No such file or directory\n> > > 2022-01-08 12:59:36.787 PST [324482] FATAL: could not load library: totally bogus: cannot open shared object file: No such file or directory\n> > > 2022-01-08 12:59:36.787 PST [324482] LOG: database system is shut down\n> >\n> > I think the first WARNING is from the GUC mechanism \"setting\" the library.\n> > And then the FATAL is from trying to apply the GUC.\n> > It 
looks like you didn't apply the 0002 patch for that test so got no CONTEXT ?\n> \n> I still had the terminal open where I tested this, and the scrollback\n> did show me applying the patch (and building after). I tried a make\n> clean and applying the patch again, and I do see the CONTEXT line now.\n> I'm not sure what the problem was but seems like PEBKAC--sorry about\n> that.\n\nMaybe you missed \"make install\" or similar.\n\nI took the liberty of adding you as a reviewer here:\nhttps://commitfest.postgresql.org/36/3482/\n\n-- \nJustin", "msg_date": "Thu, 27 Jan 2022 16:01:03 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Tue, Dec 28, 2021 at 12:45 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> 0002 adds context when failing to start.\n>\n> 2021-12-27 17:01:12.996 CST postmaster[1403] WARNING: could not load library: $libdir/plugins/asdf: cannot open shared object file: No such file or directory\n> 2021-12-27 17:01:14.938 CST postmaster[1403] FATAL: could not access file \"asdf\": No such file or directory\n> 2021-12-27 17:01:14.938 CST postmaster[1403] CONTEXT: guc \"shared_preload_libraries\"\n> 2021-12-27 17:01:14.939 CST postmaster[1403] LOG: database system is shut down\n\n-1 from me on using \"guc\" in any user-facing error message. And even\nguc -> setting isn't a big improvement. If we're going to structure\nthe reporting this way there, we should try to use a meaningful phrase\nthere, probably beginning with the word \"while\"; see \"git grep\nerrcontext.*while\" for interesting precedents.\n\nThat said, that series of messages seems a bit suspect to me, because\nthe WARNING seems to be stating the same problem as the subsequent\nFATAL and CONTEXT lines. 
Ideally we'd tighten that somehow.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 28 Jan 2022 09:42:17 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nHello\r\n\r\nI tested the patches on master branch on Ubuntu 18.04 and regression turns out fine. I did a manual test following the query examples in this email thread and I do see the warnings when I attempted these queries:\r\n\r\npostgres=# SET local_preload_libraries=xyz.so;\r\n2022-01-28 15:11:00.592 PST [13622] WARNING: could not access file \"xyz.so\"\r\nWARNING: could not access file \"xyz.so\"\r\nSET\r\npostgres=# ALTER SYSTEM SET shared_preload_libraries=abc.so;\r\n2022-01-28 15:11:07.729 PST [13622] WARNING: could not access file \"$libdir/plugins/abc.so\"\r\nWARNING: could not access file \"$libdir/plugins/abc.so\"\r\nALTER SYSTEM\r\n\r\nThis is fine as this is what these patches are aiming to provide. However, when I try to restart the server, it fails to start because abc.so and xyz.so do not exist. Setting the parameters \"local_preload_libraries\" and \"shared_preload_libraries\" to something else in postgresql.conf does not seem to take effect either.\r\nIt still complains shared_preload_libraries abc.so does not exist even though I have already set shared_preload_libraries to something else in postgresql.conf.
This seems a little strange to me \r\n\r\nthank you\r\nCary", "msg_date": "Fri, 28 Jan 2022 23:36:20 +0000", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "Thanks for looking\n\nOn Fri, Jan 28, 2022 at 11:36:20PM +0000, Cary Huang wrote:\n> This is fine as this is what these patches are aiming to provide. However, when I try to restart the server, it fails to start because abc.so and xyz.so do not exist. Setting the parameters \"local_preload_libraries\" and \"shared_preload_libraries\" to something else in postgresql.conf does not seem to take effect either.\n> It still complains shared_preload_libraries abc.so does not exist even though I have already set shared_preload_libraries to something else in postgresql.conf. This seems a little strange to me \n\nCould you show exactly what you did and the output ?\n\nThe patches don't entirely prevent someone from putting the server config into\na bad state. They only aim to tell them if they've done that, so they can fix\nit, rather than letting someone (else) find the error at some later (probably\ninconvenient) time.\n\nALTER SYSTEM adds config into postgresql.auto.conf. If you stop the server\nafter adding bad config there (after getting a warning), the server won't\nstart.
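(This is also why editing postgresql.conf alone didn't help in the test upthread: postgresql.auto.conf is read after postgresql.conf, so for a duplicated setting the ALTER SYSTEM value wins. Emulated with stand-in files — the last-occurrence pipeline below is only an illustration of that precedence, not how the server parses config:)

```shell
# postgresql.auto.conf is applied after postgresql.conf, so its value
# overrides a duplicate entry; emulate "last setting read wins".
confdir=./demo_conf
mkdir -p "$confdir"
printf "shared_preload_libraries = ''\n" > "$confdir/postgresql.conf"
printf "shared_preload_libraries = 'abc'\n" > "$confdir/postgresql.auto.conf"

effective=$(cat "$confdir/postgresql.conf" "$confdir/postgresql.auto.conf" |
    grep '^shared_preload_libraries' | tail -n 1)
echo "$effective"
# prints: shared_preload_libraries = 'abc'
```

Hence removing (or RESETting) the entry in postgresql.auto.conf, not overriding it in postgresql.conf, is what actually takes effect.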
Once the server is off, you have to remove it manually.\n\nThe goal of the patch is to 1) warn someone that they've put a bad config in\nplace, so they don't leave it there; and, 2) if the server fails to start for\nsuch a reason, provide a CONTEXT line to help them resolve it quickly.\n\nMaybe you know all that and I didn't understand what you're saying.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 28 Jan 2022 18:09:12 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nI tried the latest version of the patch, and it works as discussed. There is no documentation, but I think that's moot for this warning (we may want to note something in the setting docs, but even if so, I think we should figure out the message first and then decide if it merits additional explanation in the docs). I do not know whether it is spec-compliant, but I doubt the spec has much to say on something like this.\r\n\r\nI tried running ALTER SYSTEM and got the warnings as expected:\r\n\r\npostgres=# alter system set shared_preload_libraries = no_such_library,not_this_one_either;\r\nWARNING: could not access file \"$libdir/plugins/no_such_library\"\r\nWARNING: could not access file \"$libdir/plugins/not_this_one_either\"\r\nALTER SYSTEM\r\n\r\nI think this is great, but it would be really helpful to also indicate that at this point the server will fail to come back up after a restart. In my mind, that's a big part of the reason for having a warning here. Having made this mistake a couple of times, I would be able to read between the lines, as would many other users, but if you're not familiar with Postgres this might still be pretty opaque. 
I think if I'm reading the code correctly, this warning path is shared between ALTER SYSTEM and a SET of local_preload_libraries so it might be tricky to word this in a way that works in all situations, but it could make the precarious situation a lot clearer. I don't really know a good wording here, but maybe a hint like \"The server or session will not be able to start if it has been configured to use libraries that cannot be loaded.\"?\r\n\r\nAlso, there are two sides to this: one is actually applying the possibly-bogus setting, and the other is when that setting takes effect (e.g., attempting to start the server or to start a new session). I think Robert had good feedback regarding the latter:\r\n\r\nOn Fri, Jan 28, 2022 at 6:42 AM Robert Haas <robertmhaas@gmail.com> wrote:\r\n> On Tue, Dec 28, 2021 at 12:45 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\r\n> > 0002 adds context when failing to start.\r\n> >\r\n> > 2021-12-27 17:01:12.996 CST postmaster[1403] WARNING: could not load library: $libdir/plugins/asdf: cannot open shared object file: No such file or directory\r\n> > 2021-12-27 17:01:14.938 CST postmaster[1403] FATAL: could not access file \"asdf\": No such file or directory\r\n> > 2021-12-27 17:01:14.938 CST postmaster[1403] CONTEXT: guc \"shared_preload_libraries\"\r\n> > 2021-12-27 17:01:14.939 CST postmaster[1403] LOG: database system is shut down\r\n>\r\n> -1 from me on using \"guc\" in any user-facing error message. And even\r\n> guc -> setting isn't a big improvement. If we're going to structure\r\n> the reporting this way there, we should try to use a meaningful phrase\r\n> there, probably beginning with the word \"while\"; see \"git grep\r\n> errcontext.*while\" for interesting precedents.\r\n>\r\n> That said, that series of messages seems a bit suspect to me, because\r\n> the WARNING seems to be stating the same problem as the subsequent\r\n> FATAL and CONTEXT lines. 
Ideally we'd tighten that somehow.\r\n\r\nMaybe we don't even need the WARNING in this case? At this point, it's clear what the problem is. I think the CONTEXT line does actually help, because otherwise it's not clear why the server failed to start, but the warning does seem superfluous here. I do agree that GUC is awkward here, and I like the \"while...\" wording suggested both for consistency with other messages and how it could work here:\r\n\r\nCONTEXT: while loading \"shared_preload_libraries\"\r\n\r\nI think that would be pretty clear. In the ALTER SYSTEM case, you still need to know to edit the file in spite of the warning telling you not to edit it, but I think that's still better. Based on Cary's feedback, maybe that could be clearer, too (like you, I'm not sure if I understood what he did correctly), but I think that could certainly be future work.\r\n\r\nThanks,\r\nMaciek", "msg_date": "Wed, 02 Feb 2022 06:06:01 +0000", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Tue, Feb 1, 2022 at 11:06 PM Maciek Sakrejda <m.sakrejda@gmail.com>\nwrote:\n\n> I tried running ALTER SYSTEM and got the warnings as expected:\n>\n> postgres=# alter system set shared_preload_libraries =\n> no_such_library,not_this_one_either;\n> WARNING: could not access file \"$libdir/plugins/no_such_library\"\n> WARNING: could not access file \"$libdir/plugins/not_this_one_either\"\n> ALTER SYSTEM\n>\n> I think this is great, but it would be really helpful to also indicate\n> that at this point the server will fail to come back up after a restart. In\n> my mind, that's a big part of the reason for having a warning here. 
Having\nmade this mistake a couple of times, I would be able to read between the\nlines, as would many other users, but if you're not familiar with Postgres\nthis might still be pretty opaque.\n\n\n+1\n\nI would at least consider having the UX go something like:\n\npostgres=# ALTER SYSTEM SET shared_preload_libraries = not_such_library;\nERROR: <paraphrase: your system will not reboot in its current state as\nthat library is not present>.\nHINT: to bypass the error please add FORCE before SET\npostgres=# ALTER SYSTEM FORCE SET shared_preload_libraries =\nno_such_library;\nNOTICE: Error suppressed while setting shared_preload_libraries.\n\nThat is, have the user express their desire to leave the system in a\nprecarious state explicitly before actually doing so.\n\nUpon startup, if the system already can track each separate location that\nshared_preload_libraries is set, printing out those locations and current\nvalues would be useful context. Seeing ALTER SYSTEM in that listing would\nbe helpful.\n\nDavid J.", "msg_date": "Wed, 2 Feb 2022 08:39:03 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Wed, Feb 2, 2022 at 7:39 AM David G.
Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> I would at least consider having the UX go something like:\n>\n> postgres=# ALTER SYSTEM SET shared_preload_libraries = not_such_library;\n> ERROR: <paraphrase: your system will not reboot in its current state as\n> that library is not present>.\n> HINT: to bypass the error please add FORCE before SET\n> postgres=# ALTER SYSTEM FORCE SET shared_preload_libraries =\n> no_such_library;\n> NOTICE: Error suppressed while setting shared_preload_libraries.\n>\n> That is, have the user express their desire to leave the system in a\n> precarious state explicitly before actually doing so.\n>\n\nWhile I don't have a problem with that behavior, given that there are\ncurrently no such facilities for asserting \"yes, really\" with ALTER SYSTEM,\nI don't think it's worth introducing that just for this patch. A warning\nseems like a reasonable first step. This can always be expanded later. I'd\nrather see a warning ship than move the goalposts out of reach.", "msg_date": "Thu, 3 Feb 2022 20:36:27 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Fri, Jan 28, 2022 at 09:42:17AM -0500, Robert Haas wrote:\n> -1 from me on using \"guc\" in any user-facing error message. And even\n> guc -> setting isn't a big improvement. If we're going to structure\n> the reporting this way there, we should try to use a meaningful phrase\n> there, probably beginning with the word \"while\"; see \"git grep\n> errcontext.*while\" for interesting precedents.\n\nOn Wed, Feb 02, 2022 at 06:06:01AM +0000, Maciek Sakrejda wrote:\n> I do agree that GUC is awkward here, and I like the \"while...\" wording suggested both for consistency with other messages and how it could work here:\n> CONTEXT: while loading \"shared_preload_libraries\"\n\nFYI, it has said \"while...\" and hasn't said \"guc\" since the 2nd revision of the\npatch.\n\n> That said, that series of messages seems a bit suspect to me, because\n> the WARNING seems to be stating the same problem as the subsequent\n> FATAL and CONTEXT lines. Ideally we'd tighten that somehow.\n\nI avoided the warning by checking IsUnderPostmaster, though I'm not sure if\nthat's the right condition..\n\nOn Wed, Feb 02, 2022 at 06:06:01AM +0000, Maciek Sakrejda wrote:\n> I think this is great, but it would be really helpful to also indicate that at this point the server will fail to come back up after a restart.\n\n> I don't really know a good wording here, but maybe a hint like \"The server or session will not be able to start if it has been configured to use libraries that cannot be loaded.\"?\n\npostgres=# ALTER SYSTEM SET shared_preload_libraries =a,b;\nWARNING: could not access file \"$libdir/plugins/a\"\nHINT: The server will fail to start with the existing configuration.
If it is is shut down, it will be necessary to manually edit the postgresql.auto.conf file to allow it to start.\nWARNING: could not access file \"$libdir/plugins/b\"\nHINT: The server will fail to start with the existing configuration. If it is is shut down, it will be necessary to manually edit the postgresql.auto.conf file to allow it to start.\nALTER SYSTEM\npostgres=# ALTER SYSTEM SET session_preload_libraries =c,d;\nWARNING: could not access file \"$libdir/plugins/c\"\nHINT: New sessions will fail with the existing configuration.\nWARNING: could not access file \"$libdir/plugins/d\"\nHINT: New sessions will fail with the existing configuration.\nALTER SYSTEM\n\n$ ./tmp_install/usr/local/pgsql/bin/postgres -D src/test/regress/tmp_check/data -clogging_collector=on\n2022-02-09 19:53:48.034 CST postmaster[30979] FATAL: could not access file \"a\": No such file or directory\n2022-02-09 19:53:48.034 CST postmaster[30979] CONTEXT: while loading shared libraries for setting \"shared_preload_libraries\"\n from /home/pryzbyj/src/postgres/src/test/regress/tmp_check/data/postgresql.auto.conf:3\n2022-02-09 19:53:48.034 CST postmaster[30979] LOG: database system is shut down\n\nMaybe it's enough to show the GucSource rather than file:line...\n\n-- \nJustin", "msg_date": "Wed, 9 Feb 2022 19:58:54 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Wed, Feb 9, 2022 at 5:58 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> FYI, it has said \"while...\" and hasn't said \"guc\" since the 2nd revision of the\n> patch.\n\nThe v3-0001 attached above had \"while... for GUC...\"--sorry I wasn't clear.\n\nIn v4, the message looks fine to me for shared_preload_libraries\n(except there is a doubled \"is\"). 
However, I also get the message for\na simple SET with local_preload_libraries:\n\npostgres=# set local_preload_libraries=xyz;\nWARNING: could not access file \"xyz\"\nHINT: The server will fail to start with the existing configuration.\nIf it is is shut down, it will be necessary to manually edit the\npostgresql.auto.conf file to allow it to start.\nSET\n\nI'm not familiar with that setting (reading the docs, it's like a\nnon-superuser session_preload_libraries for compatible modules?), but\ngiven nothing is being persisted here with ALTER SYSTEM, this seems\nincorrect.\n\nChanging session_preload_libraries emits a similar warning:\n\npostgres=# set session_preload_libraries = foo;\nWARNING: could not access file \"$libdir/plugins/foo\"\nHINT: New sessions will fail with the existing configuration.\nSET\n\nThis is also not persisted, so I think this is also incorrect, right?\n(I'm not sure what setting session_preload_libraries without an ALTER\nROLE or ALTER DATABASE accomplishes, given a new session is required\nfor the change to take effect, but I thought I'd point this out.) I'm\nguessing this may be due to trying to have the warning for ALTER ROLE?\n\npostgres=# alter role bob set session_preload_libraries = foo;\nWARNING: could not access file \"$libdir/plugins/foo\"\nHINT: New sessions will fail with the existing configuration.\nALTER ROLE\n\nThis is great. 
Ideally, we'd qualify this with \"New sessions for\nuser...\" or \"New sessions for database...\" but given you get the\nwarning right after running the relevant command, maybe that's clear\nenough.\n\n> $ ./tmp_install/usr/local/pgsql/bin/postgres -D src/test/regress/tmp_check/data -clogging_collector=on\n> 2022-02-09 19:53:48.034 CST postmaster[30979] FATAL: could not access file \"a\": No such file or directory\n> 2022-02-09 19:53:48.034 CST postmaster[30979] CONTEXT: while loading shared libraries for setting \"shared_preload_libraries\"\n> from /home/pryzbyj/src/postgres/src/test/regress/tmp_check/data/postgresql.auto.conf:3\n> 2022-02-09 19:53:48.034 CST postmaster[30979] LOG: database system is shut down\n>\n> Maybe it's enough to show the GucSource rather than file:line...\n\nThis is great. I think the file:line output is helpful here.\n\nThanks,\nMaciek\n\n\n", "msg_date": "Mon, 14 Feb 2022 10:12:22 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "Maciek Sakrejda <m.sakrejda@gmail.com> writes:\n> In v4, the message looks fine to me for shared_preload_libraries\n> (except there is a doubled \"is\"). However, I also get the message for\n> a simple SET with local_preload_libraries:\n\n> postgres=# set local_preload_libraries=xyz;\n> WARNING: could not access file \"xyz\"\n> HINT: The server will fail to start with the existing configuration.\n> If it is is shut down, it will be necessary to manually edit the\n> postgresql.auto.conf file to allow it to start.\n> SET\n\nI agree with Maciek's concerns about these HINTs being emitted\ninappropriately, but I also have a stylistic gripe: they're only\nhalfway hints. 
Given that we fix things so they only print when they\nshould, the complaint about the server not starting is not a hint,\nit's a fact, which per style guidelines means it should be errdetail.\nSo I think this ought to look more like\n\nWARNING: could not access file \"xyz\"\nDETAIL: The server will fail to start with this setting.\nHINT: If the server is shut down, it will be necessary to manually edit the\npostgresql.auto.conf file to allow it to start again.\n\nI adjusted the wording a bit too --- YMMV, but I think my text is clearer.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Mar 2022 15:02:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Wed, Mar 23, 2022 at 03:02:23PM -0400, Tom Lane wrote:\n> I agree with Maciek's concerns about these HINTs being emitted\n> inappropriately, but I also have a stylistic gripe: they're only\n> halfway hints. Given that we fix things so they only print when they\n> should, the complaint about the server not starting is not a hint,\n> it's a fact, which per style guidelines means it should be errdetail.\n> So I think this ought to look more like\n> \n> WARNING: could not access file \"xyz\"\n> DETAIL: The server will fail to start with this setting.\n> HINT: If the server is shut down, it will be necessary to manually edit the\n> postgresql.auto.conf file to allow it to start again.\n> \n> I adjusted the wording a bit too --- YMMV, but I think my text is clearer.\n\nIt seems to me that there is no objection to the proposed patch, but\nan update is required. 
Justin?\n--\nMichael", "msg_date": "Thu, 16 Jun 2022 12:14:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "I've started to think that we should really WARN whenever a (set of) GUC is set\nin a manner that the server will fail to start - not just for shared libraries.\n\nIn particular, I think the server should also warn if it's going to fail to\nstart like this:\n\n2022-06-15 22:48:34.279 CDT postmaster[20782] FATAL: WAL streaming (max_wal_senders > 0) requires wal_level \"replica\" or \"logical\"\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 15 Jun 2022 22:50:14 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "Finally returning to this .. rebased and updated per feedback.\n\nI'm not sure of a good place to put test cases for this..", "msg_date": "Thu, 21 Jul 2022 20:54:12 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "Thanks for picking this back up, Justin.\n\n>I've started to think that we should really WARN whenever a (set of) GUC is set\n>in a manner that the server will fail to start - not just for shared libraries.\n\n+0.5. I think it's a reasonable change, but I've never broken my\nserver with anything other than shared_preload_libraries, so I'd\nrather see an improvement here first rather than expanding scope. 
I\nthink shared_preload_libraries (and friends) is especially tricky due\nto the syntax, and more likely to lead to problems.\n\nOn the update patch itself, I have some minor feedback about message wording\n\npostgres=# set local_preload_libraries=xyz;\nSET\n\nGreat, it's nice that this no longer gives a warning.\n\npostgres=# alter role bob set local_preload_libraries = xyz;\nWARNING: could not access file \"xyz\"\nDETAIL: New sessions will currently fail to connect with the new setting.\nALTER ROLE\n\nThe warning makes sense, but the detail feels a little awkward. I\nthink \"currently\" is sort of redundant with \"new setting\". And it\ncould be clearer that the setting did in fact take effect (I know the\nALTER ROLE command tag echo tells you that, but we could reinforce\nthat in the warning).\n\nAlso, I know I said last time that the scope of the warning is clear\nfrom the setting, but looking at it again, I think we could do better.\nI guess because when we're generating the error, we don't know whether\nthe source was ALTER DATABASE or ALTER ROLE, we can't give a more\nspecific message? Ideally, I think the DETAIL would be something like\n\"New sessions for this role will fail to connect due to this setting\".\nMaybe even with a HINT of \"Run ALTER ROLE again with a valid value to\nfix this\"? If that's not feasible, maybe \"New sessions for this role\nor database will fail to connect due to this setting\"? That message is\nnot as clear about the impact of the change as it could be, but\nhopefully you know what command you just ran, so that should make it\nunambiguous. I do think without qualifying that, it suggests that all\nnew sessions are affected.\n\nHmm, or maybe just \"New sessions affected by this setting will fail to\nconnect\"? That also makes the scope clear without the warning having\nto be aware of the scope: if you just ran ALTER DATABASE it's pretty\nclear that what is affected by the setting is the database. 
I think\nthis is probably the way to go, but leaving my thought process above\nfor context.\n\npostgres=# alter system set shared_preload_libraries = lol;\nWARNING: could not access file \"$libdir/plugins/lol\"\nDETAIL: The server will currently fail to start with this setting.\nHINT: If the server is shut down, it will be necessary to manually\nedit the postgresql.auto.conf file to allow it to start again.\nALTER SYSTEM\n\nI think this works. Tom's copy edit above omitted \"currently\" from the\nDETAIL and did not include the \"$libdir/plugins/\" prefix. I don't feel\nstrongly about these either way.\n\n2022-07-22 10:37:50.217 PDT [1131187] LOG: database system is shut down\n2022-07-22 10:37:50.306 PDT [1134058] WARNING: could not access file\n\"$libdir/plugins/lol\"\n2022-07-22 10:37:50.306 PDT [1134058] DETAIL: The server will\ncurrently fail to start with this setting.\n2022-07-22 10:37:50.306 PDT [1134058] HINT: If the server is shut\ndown, it will be necessary to manually edit the postgresql.auto.conf\nfile to allow it to start again.\n2022-07-22 10:37:50.312 PDT [1134058] FATAL: could not access file\n\"lol\": No such file or directory\n2022-07-22 10:37:50.312 PDT [1134058] CONTEXT: while loading shared\nlibraries for setting \"shared_preload_libraries\"\nfrom /home/maciek/code/aux/postgres/tmpdb/postgresql.auto.conf:3\n2022-07-22 10:37:50.312 PDT [1134058] LOG: database system is shut down\n\nHmm, I guess this is a side effect of where these messages are\nemitted, but at this point, lines 4-6 here are a little confusing, no?\nThe server was already shut down, and we're trying to start it back\nup. 
If there's no reasonable way to avoid that output, I think it's\nokay, but it'd be better to skip it (or adjust it to the new context).\n\nThanks,\nMaciek\n\n\n", "msg_date": "Fri, 22 Jul 2022 10:42:11 -0700", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "Maciek Sakrejda <m.sakrejda@gmail.com> writes:\n>> I've started to think that we should really WARN whenever a (set of) GUC is set\n>> in a manner that the server will fail to start - not just for shared libraries.\n\n> +0.5. I think it's a reasonable change, but I've never broken my\n> server with anything other than shared_preload_libraries, so I'd\n> rather see an improvement here first rather than expanding scope.\n\nGenerally speaking, anything that tries to check a combination of\nGUC settings is going to be so fragile as to be worthless. We've\nlearned that lesson the hard way in the past.\n\n> 2022-07-22 10:37:50.217 PDT [1131187] LOG: database system is shut down\n> 2022-07-22 10:37:50.306 PDT [1134058] WARNING: could not access file\n> \"$libdir/plugins/lol\"\n> 2022-07-22 10:37:50.306 PDT [1134058] DETAIL: The server will\n> currently fail to start with this setting.\n> 2022-07-22 10:37:50.306 PDT [1134058] HINT: If the server is shut\n> down, it will be necessary to manually edit the postgresql.auto.conf\n> file to allow it to start again.\n> 2022-07-22 10:37:50.312 PDT [1134058] FATAL: could not access file\n> \"lol\": No such file or directory\n> 2022-07-22 10:37:50.312 PDT [1134058] CONTEXT: while loading shared\n> libraries for setting \"shared_preload_libraries\"\n> from /home/maciek/code/aux/postgres/tmpdb/postgresql.auto.conf:3\n> 2022-07-22 10:37:50.312 PDT [1134058] LOG: database system is shut down\n\n> Hmm, I guess this is a side effect of where these messages are\n> emitted, but at this point, lines 4-6 here are a little confusing, no?\n\nThis indicates that the warning is 
being issued in the wrong place.\nIt's okay if it comes out during ALTER SYSTEM. It's not okay if it\ncomes out during server start; then it's just an annoyance.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Jul 2022 13:53:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Fri, Jul 22, 2022 at 01:53:21PM -0400, Tom Lane wrote:\n> > 2022-07-22 10:37:50.217 PDT [1131187] LOG: database system is shut down\n> > 2022-07-22 10:37:50.306 PDT [1134058] WARNING: could not access file \"$libdir/plugins/lol\"\n> > 2022-07-22 10:37:50.306 PDT [1134058] DETAIL: The server will currently fail to start with this setting.\n> > 2022-07-22 10:37:50.306 PDT [1134058] HINT: If the server is shut down, it will be necessary to manually edit the postgresql.auto.conf file to allow it to start again.\n> > 2022-07-22 10:37:50.312 PDT [1134058] FATAL: could not access file \"lol\": No such file or directory\n> > 2022-07-22 10:37:50.312 PDT [1134058] CONTEXT: while loading shared libraries for setting \"shared_preload_libraries\" from /home/maciek/code/aux/postgres/tmpdb/postgresql.auto.conf:3\n> > 2022-07-22 10:37:50.312 PDT [1134058] LOG: database system is shut down\n> \n> > Hmm, I guess this is a side effect of where these messages are\n> > emitted, but at this point, lines 4-6 here are a little confusing, no?\n> \n> This indicates that the warning is being issued in the wrong place.\n> It's okay if it comes out during ALTER SYSTEM. 
It's not okay if it\n> comes out during server start; then it's just an annoyance.\n\nThis was a regression from the previous patch version, and I even noticed the\nproblem, but then forgot when returning to the patch :(\n\nThe previous patch version checked if (!IsUnderPostmaster()) before warning.\nIs there a better way ?\n\nALTER SYSTEM uses PGC_S_FILE, the same as during startup..\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 22 Jul 2022 13:35:57 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Fri, Jul 22, 2022 at 01:53:21PM -0400, Tom Lane wrote:\n>> This indicates that the warning is being issued in the wrong place.\n>> It's okay if it comes out during ALTER SYSTEM. It's not okay if it\n>> comes out during server start; then it's just an annoyance.\n\n> The previous patch version checked if (!IsUnderPostmaster()) before warning.\n> Is there a better way ?\n\n> ALTER SYSTEM uses PGC_S_FILE, the same as during startup..\n\nShouldn't you be doing this when the source is PGC_S_TEST, instead?\nThat's pretty much what it's for. See check_default_table_access_method\nand other examples.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Jul 2022 15:00:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Fri, Jul 22, 2022 at 03:00:23PM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Fri, Jul 22, 2022 at 01:53:21PM -0400, Tom Lane wrote:\n> >> This indicates that the warning is being issued in the wrong place.\n> >> It's okay if it comes out during ALTER SYSTEM. 
It's not okay if it\n> >> comes out during server start; then it's just an annoyance.\n> \n> > The previous patch version checked if (!IsUnderPostmaster()) before warning.\n> > Is there a better way ?\n> \n> > ALTER SYSTEM uses PGC_S_FILE, the same as during startup..\n> \n> Shouldn't you be doing this when the source is PGC_S_TEST, instead?\n> That's pretty much what it's for. See check_default_table_access_method\n> and other examples.\n\nThat makes sense, but it doesn't work for ALTER SYSTEM, which uses PGC_S_FILE.\n\npostgres=# ALTER SYSTEM SET shared_preload_libraries =a;\n2022-07-22 14:07:25.489 CDT client backend[23623] psql WARNING: source 3\nWARNING: source 3\n2022-07-22 14:07:25.489 CDT client backend[23623] psql WARNING: could not access file \"$libdir/plugins/a\"\n2022-07-22 14:07:25.489 CDT client backend[23623] psql DETAIL: The server will currently fail to start with this setting.\n2022-07-22 14:07:25.489 CDT client backend[23623] psql HINT: If the server is shut down, it will be necessary to manually edit the postgresql.auto.conf file to allow it to start again.\n\npostgres=# ALTER SYSTEM SET default_table_access_method=abc;\nBreakpoint 1, check_default_table_access_method (newval=0x7ffe4c6fe820, extra=0x7ffe4c6fe828, source=PGC_S_FILE) at tableamapi.c:112\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 22 Jul 2022 14:14:43 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Fri, Jul 22, 2022 at 03:00:23PM -0400, Tom Lane wrote:\n>> Shouldn't you be doing this when the source is PGC_S_TEST, instead?\n\n> That makes sense, but it doesn't work for ALTER SYSTEM, which uses PGC_S_FILE.\n\nHmph. 
I wonder if we shouldn't change that, because it's a lie.\nThe value isn't actually coming from the config file, at least\nnot yet.\n\nWe might need to invent a separate PGC_S_TEST_FILE value; or maybe it'd\nbe better to pass the \"this is a test\" flag separately? But that'd\nrequire changing the signature of all GUC check hooks, so probably\nit's unduly invasive. I'm not sure whether any users of the TEST\ncapability need to distinguish values proposed for postgresql.auto.conf\nfrom those proposed for pg_db_role_setting ... but I guess it's\nplausible that somebody might.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Jul 2022 15:26:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Fri, Jul 22, 2022 at 03:26:47PM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Fri, Jul 22, 2022 at 03:00:23PM -0400, Tom Lane wrote:\n> >> Shouldn't you be doing this when the source is PGC_S_TEST, instead?\n> \n> > That makes sense, but it doesn't work for ALTER SYSTEM, which uses PGC_S_FILE.\n> \n> Hmph. I wonder if we shouldn't change that, because it's a lie.\n\nI think so, and I was going to raise this question some months ago when\nI first picked up the patch.\n\nThe question is, which behavior do we want ?\n\npostgres=# ALTER SYSTEM SET default_table_access_method=abc;\n2022-07-22 15:24:55.445 CDT client backend[27938] psql ERROR: invalid value for parameter \"default_table_access_method\": \"abc\"\n2022-07-22 15:24:55.445 CDT client backend[27938] psql DETAIL: Table access method \"abc\" does not exist.\n2022-07-22 15:24:55.445 CDT client backend[27938] psql STATEMENT: ALTER SYSTEM SET default_table_access_method=abc;\n\nThat behavior differs from ALTER SYSTEM SET shared_preload_libraries,\nwhich supports first setting the GUC and then installing the library. 
If\nthat wasn't supported, I think we'd just throw an error and avoid the\npossibility that the server can't start.\n\nIt caused no issue when I changed:\n\n /* Check that it's acceptable for the indicated parameter */\n if (!parse_and_validate_value(record, name, value,\n- PGC_S_FILE, ERROR,\n+ PGC_S_TEST, ERROR,\n &newval, &newextra))\n\nI'm not sure where to go from here.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 2 Sep 2022 17:24:58 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Fri, Sep 02, 2022 at 05:24:58PM -0500, Justin Pryzby wrote:\n> I'm not sure where to go from here.\n\nNot sure either, but the thread has no activity for a bit more than 1\nmonth, so marked as RwF for now.\n--\nMichael", "msg_date": "Wed, 12 Oct 2022 14:34:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Fri, Sep 02, 2022 at 05:24:58PM -0500, Justin Pryzby wrote:\n> It caused no issue when I changed:\n> \n> /* Check that it's acceptable for the indicated parameter */\n> if (!parse_and_validate_value(record, name, value,\n> - PGC_S_FILE, ERROR,\n> + PGC_S_TEST, ERROR,\n> &newval, &newextra))\n> \n> I'm not sure where to go from here.\n\nI'm hoping for some guidance ; this simple change may be naive, but I'm not\nsure what a wider change would look like.\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 29 Oct 2022 12:40:53 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Sat, Oct 29, 2022 at 10:40 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, Sep 02, 2022 at 05:24:58PM -0500, Justin Pryzby wrote:\n> > It caused no issue when I changed:\n> >\n> > /* Check that it's acceptable for the indicated parameter */\n> > if 
(!parse_and_validate_value(record, name, value,\n> > - PGC_S_FILE, ERROR,\n> > + PGC_S_TEST, ERROR,\n> > &newval, &newextra))\n> >\n> > I'm not sure where to go from here.\n>\n> I'm hoping for some guidance ; this simple change may be naive, but I'm not\n> sure what a wider change would look like.\n\nI assume you mean guidance on implementation details here, and not on\ndesign. I still think this is a useful patch and I'd be happy to\nreview and try out future iterations, but I don't have any useful\ninput here.\n\nAlso, for what it's worth, I think requiring the libraries to be in\nplace before running ALTER SYSTEM does not really seem that onerous. I\ncan't really think of use cases it precludes.\n\nThanks,\nMaciek\n\n\n", "msg_date": "Sun, 30 Oct 2022 16:12:33 -0700", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Sun, Oct 30, 2022 at 04:12:33PM -0700, Maciek Sakrejda wrote:\n> On Sat, Oct 29, 2022 at 10:40 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Fri, Sep 02, 2022 at 05:24:58PM -0500, Justin Pryzby wrote:\n> > > It caused no issue when I changed:\n> > >\n> > > /* Check that it's acceptable for the indicated parameter */\n> > > if (!parse_and_validate_value(record, name, value,\n> > > - PGC_S_FILE, ERROR,\n> > > + PGC_S_TEST, ERROR,\n> > > &newval, &newextra))\n> > >\n> > > I'm not sure where to go from here.\n> >\n> > I'm hoping for some guidance ; this simple change may be naive, but I'm not\n> > sure what a wider change would look like.\n> \n> I assume you mean guidance on implementation details here, and not on\n\nALTER SYSTEM tests the new/proposed setting using PGC_S_FILE (\"which is\na lie\").\n\nIt seems better to address that lie before attempting to change the\nbehavior of *_preload_libraries.\n\nPGC_S_TEST is a better fit, so my question is whether it's really that\nsimple ? 
\n\n> Also, for what it's worth, I think requiring the libraries to be in\n> place before running ALTER SYSTEM does not really seem that onerous. I\n> can't really think of use cases it precludes.\n\nRight now, it's allowed to set the GUC before installing the shlib.\nThat's a supported case (see the 11 month old messages toward the\nbeginning of this thread).\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 31 Oct 2022 08:31:20 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Sun, Oct 30, 2022 at 04:12:33PM -0700, Maciek Sakrejda wrote:\n>> Also, for what it's worth, I think requiring the libraries to be in\n>> place before running ALTER SYSTEM does not really seem that onerous. I\n>> can't really think of use cases it precludes.\n\n> Right now, it's allowed to set the GUC before installing the shlib.\n> That's a supported case (see the 11 month old messages toward the\n> beginning of this thread).\n\nYeah, I am afraid that you will break assorted dump/restore and\npg_upgrade scenarios if you insist on that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 31 Oct 2022 09:43:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Mon, Oct 31, 2022 at 08:31:20AM -0500, Justin Pryzby wrote:\n> On Sun, Oct 30, 2022 at 04:12:33PM -0700, Maciek Sakrejda wrote:\n> > On Sat, Oct 29, 2022 at 10:40 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > On Fri, Sep 02, 2022 at 05:24:58PM -0500, Justin Pryzby wrote:\n> > > > It caused no issue when I changed:\n> > > >\n> > > > /* Check that it's acceptable for the indicated parameter */\n> > > > if (!parse_and_validate_value(record, name, value,\n> > > > - PGC_S_FILE, ERROR,\n> > > > + PGC_S_TEST, ERROR,\n> > > > &newval, &newextra))\n> > > >\n> > > > 
I'm not sure where to go from here.\n> > >\n> > > I'm hoping for some guidance ; this simple change may be naive, but I'm not\n> > > sure what a wider change would look like.\n> > \n> > I assume you mean guidance on implementation details here, and not on\n> \n> ALTER SYSTEM tests the new/proposed setting using PGC_S_FILE (\"which is\n> a lie\").\n> \n> It seems better to address that lie before attempting to change the\n> behavior of *_preload_libraries.\n> \n> PGC_S_TEST is a better fit, so my question is whether it's really that\n> simple ? \n\nI've added the trivial change as 0001 and re-opened the patch (which ended\nup in January's CF)\n\nIf for some reason it's not really as simple as that, then 001 will\nserve as a \"straw-man patch\" hoping to elicit discussion on that point.\n\n-- \nJustin", "msg_date": "Tue, 1 Nov 2022 17:26:34 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Sat, Oct 29, 2022 at 10:40 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > On Fri, Sep 02, 2022 at 05:24:58PM -0500, Justin Pryzby wrote:\n> > > > > It caused no issue when I changed:\n> > > > >\n> > > > > /* Check that it's acceptable for the indicated parameter */\n> > > > > if (!parse_and_validate_value(record, name, value,\n> > > > > - PGC_S_FILE, ERROR,\n> > > > > + PGC_S_TEST, ERROR,\n> > > > > &newval, &newextra))\n> > > > >\n> > > > > I'm not sure where to go from here.\n> > > >\n> > > > I'm hoping for some guidance ; this simple change may be naive, but I'm not\n> > > > sure what a wider change would look like.\n\nI'm still hoping.\n\n> > PGC_S_TEST is a better fit, so my question is whether it's really that\n> > simple ? 
\n> \n> I've added the trivial change as 0001 and re-opened the patch (which ended\n> up in January's CF)\n> \n> If for some reason it's not really as simple as that, then 001 will\n> serve as a \"straw-man patch\" hoping to elicit discussion on that point.\n\n> From defdb57fe0ec373c1eea8df42f0e1831b3f9c3cc Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Fri, 22 Jul 2022 15:52:11 -0500\n> Subject: [PATCH v6 1/4] WIP: test GUCs from ALTER SYSTEM as PGC_S_TEST not\n> FILE\n> \n> WIP: ALTER SYSTEM should use PGC_S_TEST rather than PGC_S_FILE\n> \n> Since the value didn't come from a file. Or maybe we should have\n> another PGC_S_ value for this, or a flag for 'is a test'.\n> ---\n> src/backend/utils/misc/guc.c | 2 +-\n> src/include/utils/guc.h | 1 +\n> 2 files changed, 2 insertions(+), 1 deletion(-)\n> \n> diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\n> index 6f21752b844..ae8810591d6 100644\n> --- a/src/backend/utils/misc/guc.c\n> +++ b/src/backend/utils/misc/guc.c\n> @@ -4435,7 +4435,7 @@ AlterSystemSetConfigFile(AlterSystemStmt *altersysstmt)\n> \n> \t\t\t/* Check that it's acceptable for the indicated parameter */\n> \t\t\tif (!parse_and_validate_value(record, name, value,\n> -\t\t\t\t\t\t\t\t\t\t PGC_S_FILE, ERROR,\n> +\t\t\t\t\t\t\t\t\t\t PGC_S_TEST, ERROR,\n> \t\t\t\t\t\t\t\t\t\t &newval, &newextra))\n> \t\t\t\tereport(ERROR,\n> \t\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n\nThis is rebased over my own patch to enable checks for\nREGRESSION_TEST_NAME_RESTRICTIONS.\n\n-- \nJustin", "msg_date": "Thu, 6 Jul 2023 15:15:19 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Thu, Dec 28, 2023 at 10:54 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Sat, Oct 29, 2022 at 10:40 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > > On Fri, Sep 02, 2022 at 05:24:58PM 
-0500, Justin Pryzby wrote:\n> > > > > > It caused no issue when I changed:\n> > > > > >\n> > > > > > /* Check that it's acceptable for the indicated parameter */\n> > > > > > if (!parse_and_validate_value(record, name, value,\n> > > > > > - PGC_S_FILE, ERROR,\n> > > > > > + PGC_S_TEST, ERROR,\n> > > > > > &newval, &newextra))\n> > > > > >\n> > > > > > I'm not sure where to go from here.\n> > > > >\n> > > > > I'm hoping for some guidance ; this simple change may be naive, but I'm not\n> > > > > sure what a wider change would look like.\n>\n> I'm still hoping.\n>\n> > > PGC_S_TEST is a better fit, so my question is whether it's really that\n> > > simple ?\n> >\n> > I've added the trivial change as 0001 and re-opened the patch (which ended\n> > up in January's CF)\n> >\n> > If for some reason it's not really as simple as that, then 001 will\n> > serve as a \"straw-man patch\" hoping to elicit discussion on that point.\n>\n> > From defdb57fe0ec373c1eea8df42f0e1831b3f9c3cc Mon Sep 17 00:00:00 2001\n> > From: Justin Pryzby <pryzbyj@telsasoft.com>\n> > Date: Fri, 22 Jul 2022 15:52:11 -0500\n> > Subject: [PATCH v6 1/4] WIP: test GUCs from ALTER SYSTEM as PGC_S_TEST not\n> > FILE\n> >\n> > WIP: ALTER SYSTEM should use PGC_S_TEST rather than PGC_S_FILE\n> >\n> > Since the value didn't come from a file. 
Or maybe we should have\n> > another PGC_S_ value for this, or a flag for 'is a test'.\n> > ---\n> > src/backend/utils/misc/guc.c | 2 +-\n> > src/include/utils/guc.h | 1 +\n> > 2 files changed, 2 insertions(+), 1 deletion(-)\n> >\n> > diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\n> > index 6f21752b844..ae8810591d6 100644\n> > --- a/src/backend/utils/misc/guc.c\n> > +++ b/src/backend/utils/misc/guc.c\n> > @@ -4435,7 +4435,7 @@ AlterSystemSetConfigFile(AlterSystemStmt *altersysstmt)\n> >\n> > /* Check that it's acceptable for the indicated parameter */\n> > if (!parse_and_validate_value(record, name, value,\n> > - PGC_S_FILE, ERROR,\n> > + PGC_S_TEST, ERROR,\n> > &newval, &newextra))\n> > ereport(ERROR,\n> > (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n>\n> This is rebased over my own patch to enable checks for\n> REGRESSION_TEST_NAME_RESTRICTIONS.\n>\nI was reviewing the Patch and came across a minor issue that the Patch\ndoes not apply on the current Head. Please provide the updated version\nof the patch.\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Thu, 28 Dec 2023 10:56:53 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Thu, Dec 28, 2023 at 12:27 PM Shubham Khanna\n<khannashubham1197@gmail.com> wrote:\n>\n> I was reviewing the Patch and came across a minor issue that the Patch\n> does not apply on the current Head. Please provide the updated version\n> of the patch.\n\nFor your information, the commitfest manager has the ability to send\nprivate messages to authors about procedural issues like this. 
There\nis no need to tell the whole list about it.\n\n\n", "msg_date": "Mon, 1 Jan 2024 19:16:02 +0700", "msg_from": "John Naylor <johncnaylorls@gmail.com>", "msg_from_op": false, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Fri, Jul 22, 2022 at 03:26:47PM -0400, Tom Lane wrote:\n> Hmph. I wonder if we shouldn't change that, because it's a lie.\n> The value isn't actually coming from the config file, at least\n> not yet.\n\nOn Thu, Jul 06, 2023 at 03:15:20PM -0500, Justin Pryzby wrote:\n> On Sat, Oct 29, 2022 at 10:40 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > > On Fri, Sep 02, 2022 at 05:24:58PM -0500, Justin Pryzby wrote:\n> > > > > > It caused no issue when I changed:\n> > > > > >\n> > > > > > /* Check that it's acceptable for the indicated parameter */\n> > > > > > if (!parse_and_validate_value(record, name, value,\n> > > > > > - PGC_S_FILE, ERROR,\n> > > > > > + PGC_S_TEST, ERROR,\n> > > > > > &newval, &newextra))\n> > > > > >\n> > > > > > I'm not sure where to go from here.\n> > > > >\n> > > > > I'm hoping for some guidance ; this simple change may be naive, but I'm not\n> > > > > sure what a wider change would look like.\n> \n> I'm still hoping.\n\n@cfbot: rebased", "msg_date": "Sun, 7 Jan 2024 07:27:00 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Thu, Jul 6, 2023 at 4:15 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I'm still hoping.\n\nHi,\n\nI got asked to take a look at this thread.\n\nFirst, I want to explain why I think this thread hasn't gotten as much\nfeedback as Justin was hoping. 
It is always possible for any thread to\nhave that problem just because people are busy or not interested.\nHowever, I think in this case an aggravating factor is that the\ndiscussion is very \"high context\"; it's hard to understand what the\nopen problems are without reading a lot of emails and understanding\nhow they all relate to each other. One of the key questions is whether\nwe should replace PGC_S_FILE with PGC_S_TEST in\nAlterSystemSetConfigFile. I originally thought, based on reading one\nof the emails, that the question was whether we should do that out of\nsome sense of intellectual purity, and my answer was \"probably not,\nbecause that would change the behavior in a way that doesn't seem\ngood.\" But then I realized, reading another email, that Justin already\nknew that the behavior would change, or at least I'm 90% certain that\nhe knows that. So now I think the question is whether we want that\nbehavior change, but he only provided one example of how the behavior\nchanges, and it's not clear how many other scenarios are affected or\nin what way, so it's still a bit hard to answer. Plus, it took me 10\nminutes to figure out what the question was. I think that if the\nquestion had been phrased in a way that was easily understandable to\nany experienced PostgreSQL user, it's a lot more likely that one or\nmore people would have had an opinion on whether it was good or bad.\nAs it is, I think most people probably didn't understand the question,\nand the people who did understand the question may not have wanted to\nspend the time to do the research that they would have needed to do to\ncome up with an intelligent answer. 
I'm not saying any of this to\ncriticize Justin or to say that he did anything wrong, but I think we\nhave lots of examples of stuff like this on the mailing list, where\npeople are sad because they didn't get an answer, but don't always\nrealize that there might be things they could do to improve their\nchances.\n\nOn the behavior change itself, it seems to me that there's a big\ndifference between shared_preload_libraries=bogus and work_mem=bogus.\nThe former is valid or invalid according to whether bogus.so exists in\nan appropriate directory on the local machine, but the latter is\ncategorically invalid. I'm not sure to what degree we have the\ninfrastructure to distinguish those cases, but to the extent that we\ndo, handling them differently is completely defensible. It's\nreasonable to allow the first one on the theory that the\npresently-invalid configuration may at a later time become valid, but\nthat's not reasonable in the second case. So if changing PGC_S_FILE to\nPGC_S_TEST in AlterSystemSetConfigFile is going to have the effect of\nallowing garbage values into postgresql.auto.conf that would currently\nget blocked, I think that's a bad plan and we shouldn't do it. 
But\nit's quite possible I'm not fully understanding the situation.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 24 May 2024 09:26:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Fri, May 24, 2024 at 09:26:54AM -0400, Robert Haas wrote:\n> On Thu, Jul 6, 2023 at 4:15 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> But then I realized, reading another email, that Justin already knew\n> that the behavior would change, or at least I'm 90% certain that he\n> knows that.\n\nYou give me too much credit..\n\n> On the behavior change itself, it seems to me that there's a big\n> difference between shared_preload_libraries=bogus and work_mem=bogus.\n..\n> So if changing PGC_S_FILE to\n> PGC_S_TEST in AlterSystemSetConfigFile is going to have the effect of\n> allowing garbage values into postgresql.auto.conf that would currently\n> get blocked, I think that's a bad plan and we shouldn't do it.\n\nRight - this is something I'd failed to realize. We can't change it in\nthe naive way because it allows bogus values, and not just missing\nlibraries. Specifically, for GUCs with assign hooks conditional on\nPGC_TEST.\n\nWe don't want to change the behavior to allow this to succeed -- it\nwould allow leaving the server in a state that it fails to start (rather\nthan helping to avoid doing so, as intended by this thread).\n\nregression=# ALTER SYSTEM SET default_table_access_method=abc;\nNOTICE: table access method \"abc\" does not exist\nALTER SYSTEM\n\nMaybe there should be a comment explaining why PGC_FILE is used, and\nmaybe there should be a TAP test for the behavior, but they're pretty\nunrelated to this thread. So, I've dropped the 001 patch. 
\n\n-- \nJustin", "msg_date": "Fri, 24 May 2024 10:48:34 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Fri, May 24, 2024 at 11:48 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> You give me too much credit..\n\nGee, usually I'm very good at avoiding that mistake. :-)\n\n> We don't want to change the behavior to allow this to succeed -- it\n> would allow leaving the server in a state that it fails to start (rather\n> than helping to avoid doing so, as intended by this thread).\n\n+1.\n\n> Maybe there should be a comment explaning why PGC_FILE is used, and\n> maybe there should be a TAP test for the behavior, but they're pretty\n> unrelated to this thread. So, I've dropped the 001 patch.\n\n+1 for that, too.\n\n+ /* Note that filename was already canonicalized */\n\nI see that this comment is copied from load_libraries(), but I don't\nimmediately see where the canonicalization actually happens. Do you\nknow, or can you find out? Because that's crucial here, else stat()\nmight not target the real filename. I wonder if it will anyway. Like,\ncouldn't the library be versioned, and might not dlopen() try a few\npossibilities?\n\n+ errdetail(\"The server will currently fail to start with this setting.\"),\n+ errdetail(\"New sessions will currently fail to connect with the new\nsetting.\"));\n\nI understand why these messages have the word \"currently\" in them, but\nI bet the user won't. 
I'm not sure exactly what to recommend at the\nmoment (and I'm quite busy today due to the conference upcoming) but I\nthink we should try to find some way to rephrase these.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 24 May 2024 13:15:13 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: warn if GUC set to an invalid shared library" }, { "msg_contents": "On Fri, May 24, 2024 at 01:15:13PM -0400, Robert Haas wrote:\n> + /* Note that filename was already canonicalized */\n> \n> I see that this comment is copied from load_libraries(), but I don't\n> immediately see where the canonicalization actually happens. Do you\n> know, or can you find out? Because that's crucial here, else stat()\n> might not target the real filename. I wonder if it will anyway. Like,\n> couldn't the library be versioned, and might not dlopen() try a few\n> possibilities?\n\nThis comment made me realize that we've been fixated on the warning.\nBut the patch was broken, and would've always warned. I think almost\nall of the previous patch versions had this issue - oops.\n\nI added a call to expand_dynamic_library_name(), which seems to answer\nyour question.\n\nAnd added a preparatory patch to distinguish ALTER USER/DATABASE SET\nfrom SET in a function, to avoid warning in that case.\n\n-- \nJustin", "msg_date": "Mon, 22 Jul 2024 11:28:23 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: warn if GUC set to an invalid shared library" } ]
[ { "msg_contents": "Hi pgsql hackers, I was testing the new psql command \\getenv introduced on commit 33d3eeadb2 and from a user perspective, I think that would be nice if the PSQLVAR parameter were optional, therefore when it is only necessary to view the value of the environment variable, the user just run \\getenv, for example:\n\n\\getenv PATH\n/usr/local/sbin:/usr/local/bin:/usr/bin\n\nAnd when it is necessary to assign the environment variable in a variable, the user could execute like this:\n\n\\getenv PATH myvar\n\\echo :myvar\n/usr/local/sbin:/usr/local/bin:/usr/bin\n\nFor this flexibility the order of parameters would need to be reversed, instead of \\getenv PSQLVAR ENVVAR would be \\getenv ENVVAR PSQLVAR.\n\nWhat do you guys think? I'm not a C expert but if this proposal is interesting I can write a patch.\n\nThis is my first time sending an email here, so let me know if I doing something wrong.", "msg_date": "Tue, 28 Dec 2021 18:51:26 +0000", "msg_from": "Matheus Alcantara <msalcantara.dev@pm.me>", "msg_from_op": true, "msg_subject": "[PROPOSAL] Make PSQLVAR on \\getenv opitional" }, 
{ "msg_contents": "On Tue, Dec 28, 2021 at 7:51 PM Matheus Alcantara <msalcantara.dev@pm.me>\nwrote:\n\n> Hi pgsql hackers, I was testing the new psql command \\getenv introduced on\n> commit 33d3eeadb2 and from a user perspective, I think that would be nice\n> if the PSQLVAR parameter were optional, therefore when it is only necessary\n> to view the value of the environment variable, the user just run \\getenv,\n> for example:\n>\n> \\getenv PATH\n> /usr/local/sbin:/usr/local/bin:/usr/bin\n>\n> And when it is necessary to assign the environment variable in a variable,\n> the user could execute like this:\n>\n> \\getenv PATH myvar\n> \\echo :myvar\n> /usr/local/sbin:/usr/local/bin:/usr/bin\n>\n> For this flexibility the order of parameters would need to be reversed,\n> instead of \\getenv PSQLVAR ENVVAR would be \\getenv ENVVAR PSQLVAR.\n>\n> What do you guys think? I'm not a C expert but if this proposal is\n> interesting I can write a patch.\n>\n\nit is not consistent with other \\g* commands. Maybe a new statement \\senv\n? But what is the use case? You can just press ^z and inside shell write\necho $xxx, and then fg\n\nRegards\n\nPavel\n\n\n> This is my first time sending an email here, so let me know if I doing\n> something wrong.\n>\n>\n>\n", "msg_date": "Tue, 28 Dec 2021 19:55:15 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] Make PSQLVAR on \\getenv opitional" }, 
{ "msg_contents": "> On Tue, Dec 28, 2021 at 7:51 PM Matheus Alcantara <msalcantara.dev@pm.me> wrote:\n>\n>> Hi pgsql hackers, I was testing the new psql command \\getenv introduced on commit 33d3eeadb2 and from a user perspective, I think that would be nice if the PSQLVAR parameter were optional, therefore when it is only necessary to view the value of the environment variable, the user just run \\getenv, for example:\n>>\n>> \\getenv PATH\n>> /usr/local/sbin:/usr/local/bin:/usr/bin\n>>\n>> And when it is necessary to assign the environment variable in a variable, the user could execute like this:\n>>\n>> \\getenv PATH myvar\n>> \\echo :myvar\n>> /usr/local/sbin:/usr/local/bin:/usr/bin\n>>\n>> For this flexibility the order of parameters would need to be reversed, instead of \\getenv PSQLVAR ENVVAR would be \\getenv ENVVAR PSQLVAR.\n>>\n>> What do you guys think? I'm not a C expert but if this proposal is interesting I can write a patch.\n>\n> it is not consistent with other \\g* commands. Maybe a new statement \\senv ? But what is the use case? You can just press ^z and inside shell write echo $xxx, and then fg\n\nI think that the basic use case would be just for debugging, instead call \\getenv and them \\echo, we could just use \\getenv. I don't see any other advantages, It would just be to\nwrite fewer commands. I think that ^z and then fg is a good alternative, since this behavior would be inconsistent.\n\n> Regards\n>\n> Pavel\n>\n>> This is my first time sending an email here, so let me know if I doing something wrong.\n", "msg_date": "Tue, 28 Dec 2021 19:26:32 +0000", "msg_from": "Matheus Alcantara <msalcantara.dev@pm.me>", "msg_from_op": true, "msg_subject": "Re: [PROPOSAL] Make PSQLVAR on \\getenv opitional" }, 
{ "msg_contents": "Matheus Alcantara <msalcantara.dev@pm.me> writes:\n>> it is not consistent with other \\g* commands. Maybe a new statement \\senv ? But what is the use case? 
You can just press ^z and inside shell write echo $xxx, and then fg\n\n> I think that the basic use case would be just for debugging, instead call \\getenv and them \\echo, we could just use \\getenv. I don't see any other advantages, It would just be to\n> write fewer commands. I think that ^z and then fg is a good alternative, since this behavior would be inconsistent.\n\nYou don't even need to do that much. This works fine:\n\npostgres=# \\! echo $PATH\n\nSo I'm not convinced that we need another way to spell that.\n(Admittedly, this probably doesn't work on Windows, but\nI gather that environment variables are less interesting there.)\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 28 Dec 2021 14:53:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] Make PSQLVAR on \\getenv opitional" }, { "msg_contents": "On Tuesday, December 28th, 2021 at 16:53, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Matheus Alcantara <msalcantara.dev@pm.me> writes:\n>\n>>> it is not consistent with other \\g* commands. Maybe a new statement \\senv ? But what is the use case? You can just press ^z and inside shell write echo $xxx, and then fg\n>\n>> I think that the basic use case would be just for debugging, instead call \\getenv and them \\echo, we could just use \\getenv. I don't see any other advantages, It would just be to\n>>\n>> write fewer commands. I think that ^z and then fg is a good alternative, since this behavior would be inconsistent.\n>\n> You don't even need to do that much. This works fine:\n>\n> postgres=# \\! echo $PATH\n>\n> So I'm not convinced that we need another way to spell that.\n>\n> (Admittedly, this probably doesn't work on Windows, but\n>\n> I gather that environment variables are less interesting there.)\n>\n> regards, tom lane\n\nI definitely agree with this. 
We already have other ways to handle it.\n\nThanks for discussion and quick responses.\n\nMatheus Alcantara\n", "msg_date": "Tue, 28 Dec 2021 20:18:57 +0000", "msg_from": "Matheus Alcantara <msalcantara.dev@pm.me>", "msg_from_op": true, "msg_subject": "Re: [PROPOSAL] Make PSQLVAR on \\getenv opitional" }, { "msg_contents": "\nOn 12/28/21 14:53, Tom Lane wrote:\n> Matheus Alcantara <msalcantara.dev@pm.me> writes:\n>>> it is not consistent with other \\g* commands. Maybe a new statement \\senv ? But what is the use case? You can just press ^z and inside shell write echo $xxx, and then fg\n>> I think that the basic use case would be just for debugging, instead call \\getenv and them \\echo, we could just use \\getenv. I don't see any other advantages, It would just be to\n>> write fewer commands. I think that ^z and then fg is a good alternative, since this behavior would be inconsistent.\n> You don't even need to do that much. This works fine:\n>\n> postgres=# \\! 
echo $PATH\n>\n> So I'm not convinced that we need another way to spell that.\n> (Admittedly, this probably doesn't work on Windows, but\n> I gather that environment variables are less interesting there.)\n>\n> \t\t\t\n\n\nI haven't tested, but I'm fairly sure\n\n postgres=# \\! echo %PATH%\n\nwould do the trick on Windows.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 28 Dec 2021 15:45:40 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] Make PSQLVAR on \\getenv opitional" }, { "msg_contents": "## Pavel Stehule (pavel.stehule@gmail.com):\n\n> it is not consistent with other \\g* commands. Maybe a new statement \\senv\n> ? But what is the use case? You can just press ^z and inside shell write\n> echo $xxx, and then fg\n\nThat does not work: backgrounding psql will put you into your original\nshell, the parent process of psql. Changes in the environment of a\nprocess do not change the environment of the parent.\nUse \\! to start a new shell process from psql, which will inherit psql's\nenvironment.\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n", "msg_date": "Tue, 28 Dec 2021 22:38:12 +0100", "msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] Make PSQLVAR on \\getenv opitional" } ]
[ { "msg_contents": "I'm developing a new index access method. Sometimes the planner uses it and\nsometimes it doesn't. I'm trying to debug the process to understand why the\nindex does or doesn't get picked up.\n\nIs there a way to dump all of the query plans that the planner considered,\nalong with information on why they were rejected? EXPLAIN only gives info\non the plan that was actually selected.\n\nI understand that this could generate way too much info for a query with\nmany joins, but that's not what I want it for. I just want to look at some\nqueries with zero or one joins to understand what is going on.\n\nThree examples:\n\n1. I spent two days debugging a problem where the index wasn't getting used\nwhen it should have been. The problem turned out to be that the function\nassociated with the operator wasn't created as IMMUTABLE. Bizarrely, when I\nmade it IMMUTABLE, the index got used and the function didn't get called at\nall!\n\n2. I'm currently trying to debug a problem where neither the function nor\nthe index are getting called. EXPLAIN says \"Result (cost=0.00 ...) One-Time\nFilter: false\". Which function does it consider to be a one-time filter and\nwhy? I need a bit more info to track it down.\n\n3. In one case, my access method costestimate() function was returning an\nunexpected value. I couldn't see that because that plan didn't get selected.\n\nI'm looking for a tool that gives a bit more insight.", "msg_date": "Tue, 28 Dec 2021 18:07:50 -0600", "msg_from": "Chris Cleveland <ccleve+github@dieselpoint.com>", "msg_from_op": true, "msg_subject": "Look at all paths?" }, 
{ "msg_contents": "Chris Cleveland <ccleve+github@dieselpoint.com> writes:\n> I'm developing a new index access method. Sometimes the planner uses it and\n> sometimes it doesn't. I'm trying to debug the process to understand why the\n> index does or doesn't get picked up.\n\n> Is there a way to dump all of the query plans that the planner considered,\n> along with information on why they were rejected? EXPLAIN only gives info\n> on the plan that was actually selected.\n\nWhat you can do is \"set enable_seqscan = off\", then EXPLAIN.\nIf you get an indexscan where before you did not, then you have\na costing problem, ie use of index is estimated as more costly\nthan a seqscan. 
(This is not necessarily wrong, particularly\nif you make the rookie mistake of testing with a tiny table.)\nIf you still get a seqscan, then the planner doesn't think the\nquery conditions match the index, and you have a different\nproblem to solve.\n\nIf you really want to see all the paths, you could do it with\ngdb --- set a breakpoint at add_path and inspect the structs\nthat get passed to it. I doubt that will give you much\nadditional info for this problem. However, if (as seems\nlikely) it's a costing problem, then you may well end up\nstepping through your amcostestimate function to see where\nit's going off the rails; so learning to gdb the backend\nwill be well worth your time anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Dec 2021 19:18:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Look at all paths?" }, { "msg_contents": "On 12/29/21 5:07 AM, Chris Cleveland wrote:\n> I'm developing a new index access method. Sometimes the planner uses it \n> and sometimes it doesn't. I'm trying to debug the process to understand \n> why the index does or doesn't get picked up.\n> \n> Is there a way to dump all of the query plans that the planner \n> considered, along with information on why they were rejected? EXPLAIN \n> only gives info on the plan that was actually selected.\n\nYou can enable OPTIMIZER_DEBUG option. Also the gdbpg code [1] makes our \nwork much easier, sometimes.\n\n[1] https://github.com/tvondra/gdbpg\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Wed, 29 Dec 2021 08:22:51 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Look at all paths?" } ]
[ { "msg_contents": "Hi Hackers,\n\nI am wondering if we have a mechanism to convert WAL records to SQL\nstatements.\n\nI am able to use logical decoders like wal2json or test_decoding for\nconverting WAL to readable format, but I am looking for a way to convert\nWAL to sql statements.\n\nThanks\nRajesh", "msg_date": "Wed, 29 Dec 2021 11:48:04 +0530", "msg_from": "rajesh singarapu <rajesh.rs0541@gmail.com>", "msg_from_op": true, "msg_subject": "Converting WAL to SQL" }, 
{ "msg_contents": "On 29.12.21 07:18, rajesh singarapu wrote:\n> I am wondering if we have a mechanism to convert WAL records to SQL \n> statements.\n> \n> I am able to use logical decoders like wal2json or test_decoding for \n> converting WAL to readable format, but I am looking for a way to convert \n> WAL to sql statements.\n\nUsing pglogical in SPI mode has such a logic.\n\n\n", "msg_date": "Wed, 29 Dec 2021 11:04:03 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Converting WAL to SQL" }, 
{ "msg_contents": "On Wed, 29 Dec 2021 at 03:18 rajesh singarapu <rajesh.rs0541@gmail.com>\nwrote:\n\n> Hi Hackers,\n>\n> I am wondering if we have a mechanism to convert WAL records to SQL\n> statements.\n>\n> I am able to use logical decoders like wal2json or test_decoding for\n> converting WAL to readable format, but I am looking for a way to convert\n> WAL to sql statements.\n>\n>\n>\nTry this:\nhttps://github.com/michaelpq/pg_plugins/tree/main/decoder_raw\n\n-- \n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n", "msg_date": "Wed, 29 Dec 2021 08:50:23 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Converting WAL to SQL" }, 
{ "msg_contents": "On Wed, Dec 29, 2021 at 08:50:23AM -0300, Fabrízio de Royes Mello wrote:\n> Try this:\n> https://github.com/michaelpq/pg_plugins/tree/main/decoder_raw\n\nYou may want to be careful with this, and I don't know if anybody is\nusing that for serious cases so some spots may have been missed.\n--\nMichael", "msg_date": "Tue, 4 Jan 2022 21:22:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Converting WAL to SQL" }, 
{ "msg_contents": "On Tue, Jan 4, 2022 at 9:22 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Dec 29, 2021 at 08:50:23AM -0300, Fabrízio de Royes Mello wrote:\n> > Try this:\n> > https://github.com/michaelpq/pg_plugins/tree/main/decoder_raw\n>\n> You may want to be careful with this, and I don't know if anybody is\n> using that for serious cases so some spots may have been missed.\n>\n\nI used it in the past during a major upgrade process from 9.2 to 9.6.\n\nWhat we did was decode the 9.6 wal files and apply transactions to the\nold 9.2 to keep it in sync with the new promoted version. 
This was our\n\"rollback\" strategy if something went wrong with the new 9.6 version.\n\nRegards,\n\n-- \n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n", "msg_date": "Tue, 4 Jan 2022 10:47:47 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Converting WAL to SQL" }, { "msg_contents": "On Tue, Jan 4, 2022 at 10:47:47AM -0300, Fabrízio de Royes Mello wrote:\n> I used it in the past during a major upgrade process from 9.2 to 9.6.\n> \n> What we did was decode the 9.6 wal files and apply transactions to the\n> old 9.2 to keep it in sync with the new promoted version. This was our\n> \"rollback\" strategy if something went wrong with the new 9.6 version.\n\nOh, cool. 
Thanks for the feedback.\n--\nMichael", "msg_date": "Wed, 5 Jan 2022 11:37:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Converting WAL to SQL" }, { "msg_contents": "On Tue, Jan 4, 2022 at 10:47:47AM -0300, Fabrízio de Royes Mello wrote:\n> \n> On Tue, Jan 4, 2022 at 9:22 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Wed, Dec 29, 2021 at 08:50:23AM -0300, Fabrízio de Royes Mello wrote:\n> > > Try this:\n> > > https://github.com/michaelpq/pg_plugins/tree/main/decoder_raw\n> >\n> > You may want to be careful with this, and I don't know if anybody is\n> > using that for serious cases so some spots may have been missed.\n> >\n> \n> I used it in the past during a major upgrade process from 9.2 to 9.6. \n> \n> What we did was decode the 9.6 wal files and apply transactions to the\n> old 9.2 to keep it in sync with the new promoted version. This was our\n> \"rollback\" strategy if something went wrong with the new 9.6 version.\n\nHow did you deal with the issue that SQL isn't granular enough (vs.\nrow-level changes) to reproduce the result reliably, as outlined here?\n\n\thttps://momjian.us/main/blogs/pgblog/2019.html#March_6_2019\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 5 Jan 2022 11:19:29 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Converting WAL to SQL" }, { "msg_contents": "On Thu, Jan 6, 2022 at 12:19 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Tue, Jan 4, 2022 at 10:47:47AM -0300, Fabrízio de Royes Mello wrote:\n> >\n> >\n> > What we did was decode the 9.6 wal files and apply transactions to the\n> > old 9.2 to keep it in sync with the new promoted version. 
This was our\n> > \"rollback\" strategy if something went wrong with the new 9.6 version.\n>\n> How did you deal with the issue that SQL isn't granular enough (vs.\n> row-level changes) to reproduce the result reliably, as outlined here?\n\nThis is a logical decoding plugin, so it's SQL containing decoded\nrow-level changes. It will behave the same as a\npublication/suscription (apart from being far less performant, due to\nbeing plain SQL of course).\n\n\n", "msg_date": "Thu, 6 Jan 2022 01:19:35 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Converting WAL to SQL" }, { "msg_contents": "On Wed, Jan 5, 2022 at 2:19 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, Jan 6, 2022 at 12:19 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Tue, Jan 4, 2022 at 10:47:47AM -0300, Fabrízio de Royes Mello wrote:\n> > >\n> > >\n> > > What we did was decode the 9.6 wal files and apply transactions to the\n> > > old 9.2 to keep it in sync with the new promoted version. This was our\n> > > \"rollback\" strategy if something went wrong with the new 9.6 version.\n> >\n> > How did you deal with the issue that SQL isn't granular enough (vs.\n> > row-level changes) to reproduce the result reliably, as outlined here?\n>\n> This is a logical decoding plugin, so it's SQL containing decoded\n> row-level changes. 
It will behave the same as a\n> publication/suscription (apart from being far less performant, due to\n> being plain SQL of course).\n\nExactly!\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n", "msg_date": "Wed, 5 Jan 2022 15:50:52 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Converting WAL to SQL" }, { "msg_contents": "Thanks much for your suggestions,\nI am exploring logical decoding because I have two different platforms and\nversions as well.\nSo my best bet is logical decoding, but I am also wondering if somebody has\ndone replication/migration from windows to linux or vise-a-versa at\nphysical level with some tooling.\n\nthanks\nRajesh\n\n\nOn Thu, Jan 6, 2022 at 12:21 AM Fabrízio de Royes Mello <\nfabriziomello@gmail.com> wrote:\n\n>\n> On Wed, Jan 5, 2022 at 2:19 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> 
>\n> > On Thu, Jan 6, 2022 at 12:19 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > >\n> > > On Tue, Jan 4, 2022 at 10:47:47AM -0300, Fabrízio de Royes Mello\n> wrote:\n> > > >\n> > > >\n> > > > What we did was decode the 9.6 wal files and apply transactions to\n> the\n> > > > old 9.2 to keep it in sync with the new promoted version. This was\n> our\n> > > > \"rollback\" strategy if something went wrong with the new 9.6 version.\n> > >\n> > > How did you deal with the issue that SQL isn't granular enough (vs.\n> > > row-level changes) to reproduce the result reliably, as outlined here?\n> >\n> > This is a logical decoding plugin, so it's SQL containing decoded\n> > row-level changes. It will behave the same as a\n> > publication/suscription (apart from being far less performant, due to\n> > being plain SQL of course).\n>\n> Exactly!\n>\n> --\n> Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n> PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n>\n", "msg_date": "Tue, 11 Jan 2022 00:21:51 +0530", "msg_from": "rajesh singarapu <rajesh.rs0541@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Converting WAL to SQL" } ]
[ { "msg_contents": "Hello,\n\ncurrently, on Windows/MSVC, src\\tools\\msvc\\*.bat files mostly require \nbeing in that src\\tools\\msvc directory first.\n\nI suggest an obvious fix:\n\ndiff --git a/src/tools/msvc/build.bat b/src/tools/msvc/build.bat\nindex 4001ac1d0d1..407b6559cfb 100755\n--- a/src/tools/msvc/build.bat\n+++ b/src/tools/msvc/build.bat\n@@ -3,4 +3,4 @@ REM src/tools/msvc/build.bat\n REM all the logic for this now belongs in build.pl. This file really\n REM only exists so you don't have to type \"perl build.pl\"\n REM Resist any temptation to add any logic here.\n-@perl build.pl %*\n+@perl %~dp0\\build.pl %*\ndiff --git a/src/tools/msvc/install.bat b/src/tools/msvc/install.bat\nindex d03277eff2b..98edf6bdffb 100644\n--- a/src/tools/msvc/install.bat\n+++ b/src/tools/msvc/install.bat\n@@ -3,4 +3,4 @@ REM src/tools/msvc/install.bat\n REM all the logic for this now belongs in install.pl. This file really\n REM only exists so you don't have to type \"perl install.pl\"\n REM Resist any temptation to add any logic here.\n-@perl install.pl %*\n+@perl %~dp0\\install.pl %*\ndiff --git a/src/tools/msvc/vcregress.bat b/src/tools/msvc/vcregress.bat\nindex a981d3a6aa1..0d65c823e13 100644\n--- a/src/tools/msvc/vcregress.bat\n+++ b/src/tools/msvc/vcregress.bat\n@@ -3,4 +3,4 @@ REM src/tools/msvc/vcregress.bat\n REM all the logic for this now belongs in vcregress.pl. This file really\n REM only exists so you don't have to type \"perl vcregress.pl\"\n REM Resist any temptation to add any logic here.\n-@perl vcregress.pl %*\n+@perl %~dp0\\vcregress.pl %*\n\nThis patch uses standard windows cmd's %~dp0 to get the complete path \n(drive, \"d\", and path, \"p\") of the currently executing .bat file to get \nproper path of a .pl file to execute. I find the following link useful \nwhenever I need to remember details on cmd's %-substitution rules: \nhttps://ss64.com/nt/syntax-args.html\n\nWith this change, one can call those .bat files, e.g. 
\nsrc\\tools\\msvc\\build.bat, without leaving the root of the source tree.\n\nNot sure if similar change should be applied to pgflex.bat and \npgbison.bat -- never used them on Windows and they seem to require being \ncalled from the root, but perhaps they deserve a similar change.\n\nIf accepted, do you think this change is worthy of back-porting?\n\nPlease advise if you think this change is a beneficial one.\n\nP.S. Yes, I am aware of very probable upcoming move to meson, but until \nthen this little patch really helps me whenever I have to deal with \nWindows and MSVC from the command line. Besides, it could help old \nbranches as well.\n\n-- \nAnton Voloshin\n\nhttps://postgrespro.ru\nPostgres Professional, The Russian Postgres Company", "msg_date": "Wed, 29 Dec 2021 17:16:46 +0700", "msg_from": "Anton Voloshin <a.voloshin@postgrespro.ru>", "msg_from_op": true, "msg_subject": "[PATCH] allow src/tools/msvc/*.bat files to be called from the root\n of the source tree" }, { "msg_contents": "\nOn 12/29/21 05:16, Anton Voloshin wrote:\n> Hello,\n>\n> currently, on Windows/MSVC, src\\tools\\msvc\\*.bat files mostly require\n> being in that src\\tools\\msvc directory first.\n>\n> I suggest an obvious fix:\n[...]\n\n> This patch uses standard windows cmd's %~dp0 to get the complete path\n> (drive, \"d\", and path, \"p\") of the currently executing .bat file to\n> get proper path of a .pl file to execute. 
I find the following link\n> useful whenever I need to remember details on cmd's %-substitution\n> rules: https://ss64.com/nt/syntax-args.html\n>\n> With this change, one can call those .bat files, e.g.\n> src\\tools\\msvc\\build.bat, without leaving the root of the source tree.\n>\n> Not sure if similar change should be applied to pgflex.bat and\n> pgbison.bat -- never used them on Windows and they seem to require\n> being called from the root, but perhaps they deserve a similar change.\n>\n> If accepted, do you think this change is worthy of back-porting?\n>\n> Please advise if you think this change is a beneficial one.\n>\n> P.S. Yes, I am aware of very probable upcoming move to meson, but\n> until then this little patch really helps me whenever I have to deal\n> with Windows and MSVC from the command line. Besides, it could help\n> old branches as well.\n>\n\n\nSeems reasonable. I don't see any reason not to do it for pgbison.bat\nand pgflex.bat, just for the sake of consistency.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 29 Dec 2021 09:48:14 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] allow src/tools/msvc/*.bat files to be called from the\n root of the source tree" }, { "msg_contents": "On Wed, Dec 29, 2021 at 09:48:14AM -0500, Andrew Dunstan wrote:\n> Seems reasonable. I don't see any reason not to do it for pgbison.bat\n> and pgflex.bat, just for the sake of consistency.\n\nYeah, that would close the loop. 
Andrew, are you planning to check\nand apply this patch?\n--\nMichael", "msg_date": "Tue, 4 Jan 2022 21:20:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] allow src/tools/msvc/*.bat files to be called from the\n root of the source tree" }, { "msg_contents": "\nOn 1/4/22 07:20, Michael Paquier wrote:\n> On Wed, Dec 29, 2021 at 09:48:14AM -0500, Andrew Dunstan wrote:\n>> Seems reasonable. I don't see any reason not to do it for pgbison.bat\n>> and pgflex.bat, just for the sake of consistency.\n> Yeah, that would close the loop. Andrew, are you planning to check\n> and apply this patch?\n\n\n\nSure, I can do that.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 4 Jan 2022 08:37:46 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] allow src/tools/msvc/*.bat files to be called from the\n root of the source tree" }, { "msg_contents": "\nOn 1/4/22 08:37, Andrew Dunstan wrote:\n> On 1/4/22 07:20, Michael Paquier wrote:\n>> On Wed, Dec 29, 2021 at 09:48:14AM -0500, Andrew Dunstan wrote:\n>>> Seems reasonable. I don't see any reason not to do it for pgbison.bat\n>>> and pgflex.bat, just for the sake of consistency.\n>> Yeah, that would close the loop. Andrew, are you planning to check\n>> and apply this patch?\n>\n>\n> Sure, I can do that.\n>\n>\n\ndone\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 7 Jan 2022 17:06:31 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] allow src/tools/msvc/*.bat files to be called from the\n root of the source tree" } ]
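The `%~dp0` expansion used in the patch above is cmd.exe's built-in way for a batch file to resolve the drive and path of the currently executing script. For comparison only (this sketch is not part of the patch, and the echoed usage line is illustrative), the analogous self-locating wrapper in POSIX shell looks like this:

```shell
#!/bin/sh
# POSIX analogue of cmd.exe's %~dp0: resolve the absolute directory that
# contains the currently executing script, so the wrapper can be invoked
# from any working directory (e.g. the root of a source tree).
# CDPATH is cleared so that `cd` prints nothing and behaves predictably.
script_dir=$(CDPATH= cd -- "$(dirname -- "$0")" && pwd)

# A wrapper in the spirit of build.bat would then invoke its sibling as:
#   exec perl "$script_dir/build.pl" "$@"
echo "script directory: $script_dir"
```

The `cd ... && pwd` round-trip yields an absolute, normalized path even when the script is started via a relative path.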
[ { "msg_contents": "Hi,\n\nAt times, some of the checkpoint operations such as removing old WAL\nfiles, dealing with replication snapshot or mapping files etc. may\ntake a while during which the server doesn't emit any logs or\ninformation, the only logs emitted are LogCheckpointStart and\nLogCheckpointEnd. Many times this isn't a problem if the checkpoint is\nquicker, but there can be extreme situations which require the users\nto know what's going on with the current checkpoint.\n\nGiven that the commit 9ce346ea [1] introduced a nice mechanism to\nreport the long running operations of the startup process in the\nserver logs, I'm thinking we can have a similar progress mechanism for\nthe checkpoint as well. There's another idea suggested in a couple of\nother threads to have a pg_stat_progress_checkpoint similar to\npg_stat_progress_analyze/vacuum/etc. But the problem with this idea is\nduring the end-of-recovery or shutdown checkpoints, the\npg_stat_progress_checkpoint view isn't accessible as it requires a\nconnection to the server which isn't allowed.\n\nTherefore, reporting the checkpoint progress in the server logs, much\nlike [1], seems to be the best way IMO. We can 1) either make\nereport_startup_progress and log_startup_progress_interval more\ngeneric (something like ereport_log_progress and\nlog_progress_interval), move the code to elog.c, use it for\ncheckpoint progress and if required for other time-consuming\noperations 2) or have an entirely different GUC and API for checkpoint\nprogress.\n\nIMO, option (1) i.e. 
ereport_log_progress and log_progress_interval\n(better names are welcome) seems a better idea.\n\nThoughts?\n\n[1]\ncommit 9ce346eabf350a130bba46be3f8c50ba28506969\nAuthor: Robert Haas <rhaas@postgresql.org>\nDate: Mon Oct 25 11:51:57 2021 -0400\n\n Report progress of startup operations that take a long time.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 29 Dec 2021 20:00:54 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Report checkpoint progress in server logs" }, { "msg_contents": "On Wed, Dec 29, 2021 at 3:31 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> At times, some of the checkpoint operations such as removing old WAL\n> files, dealing with replication snapshot or mapping files etc. may\n> take a while during which the server doesn't emit any logs or\n> information, the only logs emitted are LogCheckpointStart and\n> LogCheckpointEnd. Many times this isn't a problem if the checkpoint is\n> quicker, but there can be extreme situations which require the users\n> to know what's going on with the current checkpoint.\n>\n> Given that the commit 9ce346ea [1] introduced a nice mechanism to\n> report the long running operations of the startup process in the\n> server logs, I'm thinking we can have a similar progress mechanism for\n> the checkpoint as well. There's another idea suggested in a couple of\n> other threads to have a pg_stat_progress_checkpoint similar to\n> pg_stat_progress_analyze/vacuum/etc. But the problem with this idea is\n> during the end-of-recovery or shutdown checkpoints, the\n> pg_stat_progress_checkpoint view isn't accessible as it requires a\n> connection to the server which isn't allowed.\n>\n> Therefore, reporting the checkpoint progress in the server logs, much\n> like [1], seems to be the best way IMO. 
We can 1) either make\n> ereport_startup_progress and log_startup_progress_interval more\n> generic (something like ereport_log_progress and\n> log_progress_interval), move the code to elog.c, use it for\n> checkpoint progress and if required for other time-consuming\n> operations 2) or have an entirely different GUC and API for checkpoint\n> progress.\n>\n> IMO, option (1) i.e. ereport_log_progress and log_progress_interval\n> (better names are welcome) seems a better idea.\n>\n> Thoughts?\n\nI find progress reporting in the logfile to generally be a terrible\nway of doing things, and the fact that we do it for the startup\nprocess is/should be only because we have no other choice, not because\nit's the right choice.\n\nI think the right choice to solve the *general* problem is the\nmentioned pg_stat_progress_checkpoints.\n\nWe may want to *additionally* have the ability to log the progress\nspecifically for the special cases when we're not able to use that\nview. And in those case, we can perhaps just use the existing\nlog_startup_progress_interval parameter for this as well -- at least\nfor the startup checkpoint.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Wed, 29 Dec 2021 15:35:40 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress in server logs" }, { "msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n>> Therefore, reporting the checkpoint progress in the server logs, much\n>> like [1], seems to be the best way IMO.\n\n> I find progress reporting in the logfile to generally be a terrible\n> way of doing things, and the fact that we do it for the startup\n> process is/should be only because we have no other choice, not because\n> it's the right choice.\n\nI'm already pretty seriously unhappy about the log-spamming effects of\n64da07c41 (default to log_checkpoints=on), and am willing to lay a 
side\nbet that that gets reverted after we have some field experience with it.\nThis proposal seems far worse from that standpoint. Keep in mind that\nour out-of-the-box logging configuration still doesn't have any log\nrotation ability, which means that the noisier the server is in normal\noperation, the sooner you fill your disk.\n\n> I think the right choice to solve the *general* problem is the\n> mentioned pg_stat_progress_checkpoints.\n\n+1\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 Dec 2021 10:40:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress in server logs" }, { "msg_contents": "Coincidentally, I was thinking about the same yesterday after getting tired of\nwaiting for the checkpoint completion on a server.\n\nOn Wed, Dec 29, 2021 at 7:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Magnus Hagander <magnus@hagander.net> writes:\n> >> Therefore, reporting the checkpoint progress in the server logs, much\n> >> like [1], seems to be the best way IMO.\n>\n> > I find progress reporting in the logfile to generally be a terrible\n> > way of doing things, and the fact that we do it for the startup\n> > process is/should be only because we have no other choice, not because\n> > it's the right choice.\n>\n> I'm already pretty seriously unhappy about the log-spamming effects of\n> 64da07c41 (default to log_checkpoints=on), and am willing to lay a side\n> bet that that gets reverted after we have some field experience with it.\n> This proposal seems far worse from that standpoint. Keep in mind that\n> our out-of-the-box logging configuration still doesn't have any log\n> rotation ability, which means that the noisier the server is in normal\n> operation, the sooner you fill your disk.\n>\n\nThe server is not open for queries while running the end-of-recovery\ncheckpoint, so a catalog view may not help there, but a process title\nchange or logging would be helpful in such cases. 
When the server is\nrunning the recovery, anxious customers ask several times the ETA for\nrecovery completion, and not having visibility into these operations makes\nlife difficult for the DBA/operations.\n\n\n>\n> > I think the right choice to solve the *general* problem is the\n> > mentioned pg_stat_progress_checkpoints.\n>\n> +1\n>\n\n+1 to this. We need at least a trace of the number of buffers to sync\n(num_to_scan) before the checkpoint start, instead of just emitting the\nstats at the end.\n\n\nBharat, it would be good to show the buffers synced counter and the total\nbuffers to sync, checkpointer pid, substep it is running, whether it is on\ntarget for completion, checkpoint_Reason (manual/times/forced). BufferSync\nhas several variables tracking the sync progress locally, and we may need\nsome refactoring here.\n\n\n>\n>\n> regards, tom lane\n>\n>\n>\n", "msg_date": "Wed, 29 Dec 2021 10:54:06 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress in server logs" }, { "msg_contents": "On Wed, Dec 29, 2021 at 10:40:59AM -0500, Tom Lane wrote:\n> Magnus Hagander <magnus@hagander.net> writes:\n>> I think the right choice to solve the *general* problem is the\n>> mentioned pg_stat_progress_checkpoints.\n> \n> +1\n\nAgreed. 
I don't see why this would not work as there are\nPgBackendStatus entries for each auxiliary process.\n--\nMichael", "msg_date": "Tue, 4 Jan 2022 21:11:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress in server logs" }, { "msg_contents": "On Wed, Dec 29, 2021 at 10:40:59AM -0500, Tom Lane wrote:\n> Magnus Hagander <magnus@hagander.net> writes:\n> >> Therefore, reporting the checkpoint progress in the server logs, much\n> >> like [1], seems to be the best way IMO.\n> \n> > I find progress reporting in the logfile to generally be a terrible\n> > way of doing things, and the fact that we do it for the startup\n> > process is/should be only because we have no other choice, not because\n> > it's the right choice.\n> \n> I'm already pretty seriously unhappy about the log-spamming effects of\n> 64da07c41 (default to log_checkpoints=on), and am willing to lay a side\n> bet that that gets reverted after we have some field experience with it.\n> This proposal seems far worse from that standpoint. Keep in mind that\n> our out-of-the-box logging configuration still doesn't have any log\n> rotation ability, which means that the noisier the server is in normal\n> operation, the sooner you fill your disk.\n\nI think we are looking at three potential observable behaviors people\nmight care about:\n\n* the current activity/progress of checkpoints\n* the historical reporting of checkpoint completion, mixed in with other\n log messages for later analysis\n* the aggregate behavior of checkpoint operation\n\nI think it is clear that checkpoint progress activity isn't useful for\nthe server logs because that information has little historical value,\nbut does fit for a progress view. 
As Tom already expressed, we will\nhave to wait to see if non-progress checkpoint information in the logs\nhas sufficient historical value.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 5 Jan 2022 18:42:24 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress in server logs" }, { "msg_contents": "> I think the right choice to solve the *general* problem is the\n> mentioned pg_stat_progress_checkpoints.\n>\n> We may want to *additionally* have the ability to log the progress\n> specifically for the special cases when we're not able to use that\n> view. And in those case, we can perhaps just use the existing\n> log_startup_progress_interval parameter for this as well -- at least\n> for the startup checkpoint.\n\n+1\n\n> We need at least a trace of the number of buffers to sync (num_to_scan) before the checkpoint start, instead of just emitting the stats at the end.\n>\n> Bharat, it would be good to show the buffers synced counter and the total buffers to sync, checkpointer pid, substep it is running, whether it is on target for completion, checkpoint_Reason\n> (manual/times/forced). BufferSync has several variables tracking the sync progress locally, and we may need some refactoring here.\n\nI agree to provide above mentioned information as part of showing the\nprogress of current checkpoint operation. 
I am currently looking into\nthe code to know if any other information can be added.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Thu, Jan 6, 2022 at 5:12 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Wed, Dec 29, 2021 at 10:40:59AM -0500, Tom Lane wrote:\n> > Magnus Hagander <magnus@hagander.net> writes:\n> > >> Therefore, reporting the checkpoint progress in the server logs, much\n> > >> like [1], seems to be the best way IMO.\n> >\n> > > I find progress reporting in the logfile to generally be a terrible\n> > > way of doing things, and the fact that we do it for the startup\n> > > process is/should be only because we have no other choice, not because\n> > > it's the right choice.\n> >\n> > I'm already pretty seriously unhappy about the log-spamming effects of\n> > 64da07c41 (default to log_checkpoints=on), and am willing to lay a side\n> > bet that that gets reverted after we have some field experience with it.\n> > This proposal seems far worse from that standpoint. Keep in mind that\n> > our out-of-the-box logging configuration still doesn't have any log\n> > rotation ability, which means that the noisier the server is in normal\n> > operation, the sooner you fill your disk.\n>\n> I think we are looking at three potential observable behaviors people\n> might care about:\n>\n> * the current activity/progress of checkpoints\n> * the historical reporting of checkpoint completion, mixed in with other\n> log messages for later analysis\n> * the aggregate behavior of checkpoint operation\n>\n> I think it is clear that checkpoint progress activity isn't useful for\n> the server logs because that information has little historical value,\n> but does fit for a progress view. 
As Tom already expressed, we will\n> have to wait to see if non-progress checkpoint information in the logs\n> has sufficient historical value.\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n>\n>\n\n\n", "msg_date": "Fri, 21 Jan 2022 11:07:18 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress in server logs" }, { "msg_contents": "On Fri, Jan 21, 2022 at 11:07 AM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > I think the right choice to solve the *general* problem is the\n> > mentioned pg_stat_progress_checkpoints.\n> >\n> > We may want to *additionally* have the ability to log the progress\n> > specifically for the special cases when we're not able to use that\n> > view. And in those case, we can perhaps just use the existing\n> > log_startup_progress_interval parameter for this as well -- at least\n> > for the startup checkpoint.\n>\n> +1\n>\n> > We need at least a trace of the number of buffers to sync (num_to_scan) before the checkpoint start, instead of just emitting the stats at the end.\n> >\n> > Bharat, it would be good to show the buffers synced counter and the total buffers to sync, checkpointer pid, substep it is running, whether it is on target for completion, checkpoint_Reason\n> > (manual/times/forced). BufferSync has several variables tracking the sync progress locally, and we may need some refactoring here.\n>\n> I agree to provide above mentioned information as part of showing the\n> progress of current checkpoint operation. 
I am currently looking into\n> the code to know if any other information can be added.\n\nAs suggested in the other thread by Julien, I'm changing the subject\nof this thread to reflect the discussion.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 28 Jan 2022 12:24:48 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "> > We need at least a trace of the number of buffers to sync (num_to_scan) before the checkpoint start, instead of just emitting the stats at the end.\n> >\n> > Bharat, it would be good to show the buffers synced counter and the total buffers to sync, checkpointer pid, substep it is running, whether it is on target for completion, checkpoint_Reason\n> > (manual/times/forced). BufferSync has several variables tracking the sync progress locally, and we may need some refactoring here.\n>\n> I agree to provide above mentioned information as part of showing the\n> progress of current checkpoint operation. I am currently looking into\n> the code to know if any other information can be added.\n\nHere is the initial patch to show the progress of checkpoint through\npg_stat_progress_checkpoint view. Please find the attachment.\n\nThe information added to this view are pid - process ID of a\nCHECKPOINTER process, kind - kind of checkpoint indicates the reason\nfor checkpoint (values can be wal, time or force), phase - indicates\nthe current phase of checkpoint operation, total_buffer_writes - total\nnumber of buffers to be written, buffers_processed - number of buffers\nprocessed, buffers_written - number of buffers written,\ntotal_file_syncs - total number of files to be synced, files_synced -\nnumber of files synced.\n\nThere are many operations happen as part of checkpoint. 
For each of\nthe operation I am updating the phase field of\npg_stat_progress_checkpoint view. The values supported for this field\nare initializing, checkpointing replication slots, checkpointing\nsnapshots, checkpointing logical rewrite mappings, checkpointing CLOG\npages, checkpointing CommitTs pages, checkpointing SUBTRANS pages,\ncheckpointing MULTIXACT pages, checkpointing SLRU pages, checkpointing\nbuffers, performing sync requests, performing two phase checkpoint,\nrecycling old XLOG files and Finalizing. In case of checkpointing\nbuffers phase, the fields total_buffer_writes, buffers_processed and\nbuffers_written shows the detailed progress of writing buffers. In\ncase of performing sync requests phase, the fields total_file_syncs\nand files_synced shows the detailed progress of syncing files. In\nother phases, only the phase field is getting updated and it is\ndifficult to show the progress because we do not get the total number\nof files count without traversing the directory. It is not worth to\ncalculate that as it affects the performance of the checkpoint. I also\ngave a thought to just mention the number of files processed, but this\nwont give a meaningful progress information (It can be treated as\nstatistics). Hence just updating the phase field in those scenarios.\n\nApart from above fields, I am planning to add few more fields to the\nview in the next patch. That is, process ID of the backend process\nwhich triggered a CHECKPOINT command, checkpoint start location, filed\nto indicate whether it is a checkpoint or restartpoint and elapsed\ntime of the checkpoint operation. Please share your thoughts. 
I would\nbe happy to add any other information that contributes to showing the\nprogress of checkpoint.\n\nAs per the discussion in this thread, there should be some mechanism\nto show the progress of checkpoint during shutdown and end-of-recovery\ncases as we cannot access pg_stat_progress_checkpoint in those cases.\nI am working on this to use log_startup_progress_interval mechanism to\nlog the progress in the server logs.\n\nKindly review the patch and share your thoughts.\n\n\nOn Fri, Jan 28, 2022 at 12:24 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Jan 21, 2022 at 11:07 AM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> >\n> > > I think the right choice to solve the *general* problem is the\n> > > mentioned pg_stat_progress_checkpoints.\n> > >\n> > > We may want to *additionally* have the ability to log the progress\n> > > specifically for the special cases when we're not able to use that\n> > > view. And in those case, we can perhaps just use the existing\n> > > log_startup_progress_interval parameter for this as well -- at least\n> > > for the startup checkpoint.\n> >\n> > +1\n> >\n> > > We need at least a trace of the number of buffers to sync (num_to_scan) before the checkpoint start, instead of just emitting the stats at the end.\n> > >\n> > > Bharat, it would be good to show the buffers synced counter and the total buffers to sync, checkpointer pid, substep it is running, whether it is on target for completion, checkpoint_Reason\n> > > (manual/times/forced). BufferSync has several variables tracking the sync progress locally, and we may need some refactoring here.\n> >\n> > I agree to provide above mentioned information as part of showing the\n> > progress of current checkpoint operation. 
I am currently looking into\n> > the code to know if any other information can be added.\n>\n> As suggested in the other thread by Julien, I'm changing the subject\n> of this thread to reflect the discussion.\n>\n> Regards,\n> Bharath Rupireddy.", "msg_date": "Thu, 10 Feb 2022 12:22:48 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "> Apart from above fields, I am planning to add few more fields to the\n> view in the next patch. That is, process ID of the backend process\n> which triggered a CHECKPOINT command, checkpoint start location, filed\n> to indicate whether it is a checkpoint or restartpoint and elapsed\n> time of the checkpoint operation. Please share your thoughts. I would\n> be happy to add any other information that contributes to showing the\n> progress of checkpoint.\n\nThe progress reporting mechanism of postgres uses the\n'st_progress_param' array of 'PgBackendStatus' structure to hold the\ninformation related to the progress. There is a function\n'pgstat_progress_update_param()' which takes 'index' and 'val' as\narguments and updates the 'val' to corresponding 'index' in the\n'st_progress_param' array. This mechanism works fine when all the\nprogress information is of type integer as the data type of\n'st_progress_param' is of type integer. If the progress data is of\ndifferent type than integer, then there is no easy way to do so. In my\nunderstanding, define a new structure with additional fields. Add this\nas part of the 'PgBackendStatus' structure and support the necessary\nfunction to update and fetch the data from this structure. This\nbecomes very ugly as it will not match the existing mechanism of\nprogress reporting. Kindly let me know if there is any better way to\nhandle this. 
If there are any changes to the existing mechanism to\nmake it generic to support basic data types, I would like to discuss\nthis in the new thread.\n\nOn Thu, Feb 10, 2022 at 12:22 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > > We need at least a trace of the number of buffers to sync (num_to_scan) before the checkpoint start, instead of just emitting the stats at the end.\n> > >\n> > > Bharat, it would be good to show the buffers synced counter and the total buffers to sync, checkpointer pid, substep it is running, whether it is on target for completion, checkpoint_Reason\n> > > (manual/times/forced). BufferSync has several variables tracking the sync progress locally, and we may need some refactoring here.\n> >\n> > I agree to provide above mentioned information as part of showing the\n> > progress of current checkpoint operation. I am currently looking into\n> > the code to know if any other information can be added.\n>\n> Here is the initial patch to show the progress of checkpoint through\n> pg_stat_progress_checkpoint view. Please find the attachment.\n>\n> The information added to this view are pid - process ID of a\n> CHECKPOINTER process, kind - kind of checkpoint indicates the reason\n> for checkpoint (values can be wal, time or force), phase - indicates\n> the current phase of checkpoint operation, total_buffer_writes - total\n> number of buffers to be written, buffers_processed - number of buffers\n> processed, buffers_written - number of buffers written,\n> total_file_syncs - total number of files to be synced, files_synced -\n> number of files synced.\n>\n> There are many operations happen as part of checkpoint. For each of\n> the operation I am updating the phase field of\n> pg_stat_progress_checkpoint view. 
The values supported for this field\n> are initializing, checkpointing replication slots, checkpointing\n> snapshots, checkpointing logical rewrite mappings, checkpointing CLOG\n> pages, checkpointing CommitTs pages, checkpointing SUBTRANS pages,\n> checkpointing MULTIXACT pages, checkpointing SLRU pages, checkpointing\n> buffers, performing sync requests, performing two phase checkpoint,\n> recycling old XLOG files and Finalizing. In case of checkpointing\n> buffers phase, the fields total_buffer_writes, buffers_processed and\n> buffers_written shows the detailed progress of writing buffers. In\n> case of performing sync requests phase, the fields total_file_syncs\n> and files_synced shows the detailed progress of syncing files. In\n> other phases, only the phase field is getting updated and it is\n> difficult to show the progress because we do not get the total number\n> of files count without traversing the directory. It is not worth to\n> calculate that as it affects the performance of the checkpoint. I also\n> gave a thought to just mention the number of files processed, but this\n> wont give a meaningful progress information (It can be treated as\n> statistics). Hence just updating the phase field in those scenarios.\n>\n> Apart from above fields, I am planning to add few more fields to the\n> view in the next patch. That is, process ID of the backend process\n> which triggered a CHECKPOINT command, checkpoint start location, filed\n> to indicate whether it is a checkpoint or restartpoint and elapsed\n> time of the checkpoint operation. Please share your thoughts. 
I would\n> be happy to add any other information that contributes to showing the\n> progress of checkpoint.\n>\n> As per the discussion in this thread, there should be some mechanism\n> to show the progress of checkpoint during shutdown and end-of-recovery\n> cases as we cannot access pg_stat_progress_checkpoint in those cases.\n> I am working on this to use log_startup_progress_interval mechanism to\n> log the progress in the server logs.\n>\n> Kindly review the patch and share your thoughts.\n>\n>\n> On Fri, Jan 28, 2022 at 12:24 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Fri, Jan 21, 2022 at 11:07 AM Nitin Jadhav\n> > <nitinjadhavpostgres@gmail.com> wrote:\n> > >\n> > > > I think the right choice to solve the *general* problem is the\n> > > > mentioned pg_stat_progress_checkpoints.\n> > > >\n> > > > We may want to *additionally* have the ability to log the progress\n> > > > specifically for the special cases when we're not able to use that\n> > > > view. And in those case, we can perhaps just use the existing\n> > > > log_startup_progress_interval parameter for this as well -- at least\n> > > > for the startup checkpoint.\n> > >\n> > > +1\n> > >\n> > > > We need at least a trace of the number of buffers to sync (num_to_scan) before the checkpoint start, instead of just emitting the stats at the end.\n> > > >\n> > > > Bharat, it would be good to show the buffers synced counter and the total buffers to sync, checkpointer pid, substep it is running, whether it is on target for completion, checkpoint_Reason\n> > > > (manual/times/forced). BufferSync has several variables tracking the sync progress locally, and we may need some refactoring here.\n> > >\n> > > I agree to provide above mentioned information as part of showing the\n> > > progress of current checkpoint operation. 
I am currently looking into\n> > > the code to know if any other information can be added.\n> >\n> > As suggested in the other thread by Julien, I'm changing the subject\n> > of this thread to reflect the discussion.\n> >\n> > Regards,\n> > Bharath Rupireddy.\n\n\n", "msg_date": "Tue, 15 Feb 2022 17:45:26 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "On Tue, 15 Feb 2022 at 13:16, Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > Apart from above fields, I am planning to add few more fields to the\n> > view in the next patch. That is, process ID of the backend process\n> > which triggered a CHECKPOINT command, checkpoint start location, filed\n> > to indicate whether it is a checkpoint or restartpoint and elapsed\n> > time of the checkpoint operation. Please share your thoughts. I would\n> > be happy to add any other information that contributes to showing the\n> > progress of checkpoint.\n>\n> The progress reporting mechanism of postgres uses the\n> 'st_progress_param' array of 'PgBackendStatus' structure to hold the\n> information related to the progress. There is a function\n> 'pgstat_progress_update_param()' which takes 'index' and 'val' as\n> arguments and updates the 'val' to corresponding 'index' in the\n> 'st_progress_param' array. This mechanism works fine when all the\n> progress information is of type integer as the data type of\n> 'st_progress_param' is of type integer. If the progress data is of\n> different type than integer, then there is no easy way to do so.\n\nProgress parameters are int64, so all of the new 'checkpoint start\nlocation' (lsn = uint64), 'triggering backend PID' (int), 'elapsed\ntime' (store as start time in stat_progress, timestamp fits in 64\nbits) and 'checkpoint or restartpoint?' 
(boolean) would each fit in a\ncurrent stat_progress parameter. Some processing would be required at\nthe view, but that's not impossible to overcome.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Wed, 16 Feb 2022 20:51:20 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "On Thu, 10 Feb 2022 at 07:53, Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > > We need at least a trace of the number of buffers to sync (num_to_scan) before the checkpoint start, instead of just emitting the stats at the end.\n> > >\n> > > Bharat, it would be good to show the buffers synced counter and the total buffers to sync, checkpointer pid, substep it is running, whether it is on target for completion, checkpoint_Reason\n> > > (manual/times/forced). BufferSync has several variables tracking the sync progress locally, and we may need some refactoring here.\n> >\n> > I agree to provide above mentioned information as part of showing the\n> > progress of current checkpoint operation. I am currently looking into\n> > the code to know if any other information can be added.\n>\n> Here is the initial patch to show the progress of checkpoint through\n> pg_stat_progress_checkpoint view. 
Please find the attachment.\n>\n> The information added to this view are pid - process ID of a\n> CHECKPOINTER process, kind - kind of checkpoint indicates the reason\n> for checkpoint (values can be wal, time or force), phase - indicates\n> the current phase of checkpoint operation, total_buffer_writes - total\n> number of buffers to be written, buffers_processed - number of buffers\n> processed, buffers_written - number of buffers written,\n> total_file_syncs - total number of files to be synced, files_synced -\n> number of files synced.\n>\n> There are many operations happen as part of checkpoint. For each of\n> the operation I am updating the phase field of\n> pg_stat_progress_checkpoint view. The values supported for this field\n> are initializing, checkpointing replication slots, checkpointing\n> snapshots, checkpointing logical rewrite mappings, checkpointing CLOG\n> pages, checkpointing CommitTs pages, checkpointing SUBTRANS pages,\n> checkpointing MULTIXACT pages, checkpointing SLRU pages, checkpointing\n> buffers, performing sync requests, performing two phase checkpoint,\n> recycling old XLOG files and Finalizing. In case of checkpointing\n> buffers phase, the fields total_buffer_writes, buffers_processed and\n> buffers_written shows the detailed progress of writing buffers. In\n> case of performing sync requests phase, the fields total_file_syncs\n> and files_synced shows the detailed progress of syncing files. In\n> other phases, only the phase field is getting updated and it is\n> difficult to show the progress because we do not get the total number\n> of files count without traversing the directory. It is not worth to\n> calculate that as it affects the performance of the checkpoint. I also\n> gave a thought to just mention the number of files processed, but this\n> wont give a meaningful progress information (It can be treated as\n> statistics). 
Hence just updating the phase field in those scenarios.\n>\n> Apart from above fields, I am planning to add few more fields to the\n> view in the next patch. That is, process ID of the backend process\n> which triggered a CHECKPOINT command, checkpoint start location, filed\n> to indicate whether it is a checkpoint or restartpoint and elapsed\n> time of the checkpoint operation. Please share your thoughts. I would\n> be happy to add any other information that contributes to showing the\n> progress of checkpoint.\n>\n> As per the discussion in this thread, there should be some mechanism\n> to show the progress of checkpoint during shutdown and end-of-recovery\n> cases as we cannot access pg_stat_progress_checkpoint in those cases.\n> I am working on this to use log_startup_progress_interval mechanism to\n> log the progress in the server logs.\n>\n> Kindly review the patch and share your thoughts.\n\nInteresting idea, and overall a nice addition to the\npg_stat_progress_* reporting infrastructure.\n\nCould you add your patch to the current commitfest at\nhttps://commitfest.postgresql.org/37/?\n\nSee below for some comments on the patch:\n\n> xlog.c @ checkpoint_progress_start, checkpoint_progress_update_param, checkpoint_progress_end\n> + /* In bootstrap mode, we don't actually record anything. */\n> + if (IsBootstrapProcessingMode())\n> + return;\n\nWhy do you check against the state of the system?\npgstat_progress_update_* already provides protections against updating\nthe progress tables if the progress infrastructure is not loaded; and\notherwise (in the happy path) the cost of updating the progress fields\nwill be quite a bit higher than normal. 
Updating stat_progress isn't\nvery expensive (quite cheap, really), so I don't quite get why you\nguard against reporting stats when you expect no other client to be\nlistening.\n\nI think you can simplify this a lot by directly using\npgstat_progress_update_param() instead.\n\n> xlog.c @ checkpoint_progress_start\n> + pgstat_progress_start_command(PROGRESS_COMMAND_CHECKPOINT, InvalidOid);\n> + checkpoint_progress_update_param(flags, PROGRESS_CHECKPOINT_PHASE,\n> + PROGRESS_CHECKPOINT_PHASE_INIT);\n> + if (flags & CHECKPOINT_CAUSE_XLOG)\n> + checkpoint_progress_update_param(flags, PROGRESS_CHECKPOINT_KIND,\n> + PROGRESS_CHECKPOINT_KIND_WAL);\n> + else if (flags & CHECKPOINT_CAUSE_TIME)\n> + checkpoint_progress_update_param(flags, PROGRESS_CHECKPOINT_KIND,\n> + PROGRESS_CHECKPOINT_KIND_TIME);\n> + [...]\n\nCould you assign the kind of checkpoint to a local variable, and then\nupdate the \"phase\" and \"kind\" parameters at the same time through\npgstat_progress_update_multi_param(2, ...)? See\nBuildRelationExtStatistics in extended_stats.c for an example usage.\nNote that regardless of whether checkpoint_progress_update* will\nremain, the checks done in that function already have been checked in\nthis function as well, so you can use the pgstat_* functions directly.\n\n> monitoring.sgml\n> + <structname>pg_stat_progress_checkpoint</structname> view will contain a\n> + single row indicating the progress of checkpoint operation.\n\n... add \"if a checkpoint is currently active\".\n\n> + <structfield>total_buffer_writes</structfield> <type>bigint</type>\n> + <structfield>total_file_syncs</structfield> <type>bigint</type>\n\nThe other progress tables use [type]_total as column names for counter\ntargets (e.g. backup_total for backup_streamed, heap_blks_total for\nheap_blks_scanned, etc.). 
I think that `buffers_total` and\n`files_total` would be better column names.\n\n> + The checkpoint operation is requested due to XLOG filling.\n\n+ The checkpoint was started because >max_wal_size< of WAL was written.\n\n> + The checkpoint operation is requested due to timeout.\n\n+ The checkpoint was started due to the expiration of a\n>checkpoint_timeout< interval\n\n> + The checkpoint operation is forced even if no XLOG activity has occurred\n> + since the last one.\n\n+ Some operation forced a checkpoint.\n\n> + <entry><literal>checkpointing CommitTs pages</literal></entry>\n\nCommitTs -> Commit time stamp\n\nThanks for working on this.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Wed, 16 Feb 2022 21:02:45 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "> > The progress reporting mechanism of postgres uses the\n> > 'st_progress_param' array of 'PgBackendStatus' structure to hold the\n> > information related to the progress. There is a function\n> > 'pgstat_progress_update_param()' which takes 'index' and 'val' as\n> > arguments and updates the 'val' to corresponding 'index' in the\n> > 'st_progress_param' array. This mechanism works fine when all the\n> > progress information is of type integer as the data type of\n> > 'st_progress_param' is of type integer. If the progress data is of\n> > different type than integer, then there is no easy way to do so.\n>\n> Progress parameters are int64, so all of the new 'checkpoint start\n> location' (lsn = uint64), 'triggering backend PID' (int), 'elapsed\n> time' (store as start time in stat_progress, timestamp fits in 64\n> bits) and 'checkpoint or restartpoint?' (boolean) would each fit in a\n> current stat_progress parameter. 
Some processing would be required at\n> the view, but that's not impossible to overcome.\n\nThank you for sharing the information. 'triggering backend PID' (int)\n- can be stored without any problem. 'checkpoint or restartpoint?'\n(boolean) - can be stored as an integer value like\nPROGRESS_CHECKPOINT_TYPE_CHECKPOINT(0) and\nPROGRESS_CHECKPOINT_TYPE_RESTARTPOINT(1). 'elapsed time' (store as\nstart time in stat_progress, timestamp fits in 64 bits) - since\nTimestamptz is of type int64 internally, we can store the timestamp\nvalue in the progress parameter and then expose a function like\n'pg_stat_get_progress_checkpoint_elapsed' which takes int64 (not\nTimestamptz) as argument and then returns a string representing the\nelapsed time. This function can be called in the view. Is it\nsafe/advisable to use the int64 type here rather than Timestamptz for\nthis purpose? 'checkpoint start location' (lsn = uint64) - I feel we\ncannot use progress parameters for this case, as assigning uint64 to\nint64 would be an issue for larger values and can lead to hidden\nbugs.\n\nThoughts?\n\nThanks & Regards,\nNitin Jadhav\n\n\nOn Thu, Feb 17, 2022 at 1:33 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Thu, 10 Feb 2022 at 07:53, Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> >\n> > > > We need at least a trace of the number of buffers to sync (num_to_scan) before the checkpoint start, instead of just emitting the stats at the end.\n> > > >\n> > > > Bharat, it would be good to show the buffers synced counter and the total buffers to sync, checkpointer pid, substep it is running, whether it is on target for completion, checkpoint_Reason\n> > > > (manual/times/forced). BufferSync has several variables tracking the sync progress locally, and we may need some refactoring here.\n> > >\n> > > I agree to provide above mentioned information as part of showing the\n> > > progress of current checkpoint operation.
I am currently looking into\n> > > the code to know if any other information can be added.\n> >\n> > Here is the initial patch to show the progress of checkpoint through\n> > pg_stat_progress_checkpoint view. Please find the attachment.\n> >\n> > The information added to this view are pid - process ID of a\n> > CHECKPOINTER process, kind - kind of checkpoint indicates the reason\n> > for checkpoint (values can be wal, time or force), phase - indicates\n> > the current phase of checkpoint operation, total_buffer_writes - total\n> > number of buffers to be written, buffers_processed - number of buffers\n> > processed, buffers_written - number of buffers written,\n> > total_file_syncs - total number of files to be synced, files_synced -\n> > number of files synced.\n> >\n> > There are many operations happen as part of checkpoint. For each of\n> > the operation I am updating the phase field of\n> > pg_stat_progress_checkpoint view. The values supported for this field\n> > are initializing, checkpointing replication slots, checkpointing\n> > snapshots, checkpointing logical rewrite mappings, checkpointing CLOG\n> > pages, checkpointing CommitTs pages, checkpointing SUBTRANS pages,\n> > checkpointing MULTIXACT pages, checkpointing SLRU pages, checkpointing\n> > buffers, performing sync requests, performing two phase checkpoint,\n> > recycling old XLOG files and Finalizing. In case of checkpointing\n> > buffers phase, the fields total_buffer_writes, buffers_processed and\n> > buffers_written shows the detailed progress of writing buffers. In\n> > case of performing sync requests phase, the fields total_file_syncs\n> > and files_synced shows the detailed progress of syncing files. In\n> > other phases, only the phase field is getting updated and it is\n> > difficult to show the progress because we do not get the total number\n> > of files count without traversing the directory. It is not worth to\n> > calculate that as it affects the performance of the checkpoint. 
I also\n> > gave a thought to just mention the number of files processed, but this\n> > wont give a meaningful progress information (It can be treated as\n> > statistics). Hence just updating the phase field in those scenarios.\n> >\n> > Apart from above fields, I am planning to add few more fields to the\n> > view in the next patch. That is, process ID of the backend process\n> > which triggered a CHECKPOINT command, checkpoint start location, filed\n> > to indicate whether it is a checkpoint or restartpoint and elapsed\n> > time of the checkpoint operation. Please share your thoughts. I would\n> > be happy to add any other information that contributes to showing the\n> > progress of checkpoint.\n> >\n> > As per the discussion in this thread, there should be some mechanism\n> > to show the progress of checkpoint during shutdown and end-of-recovery\n> > cases as we cannot access pg_stat_progress_checkpoint in those cases.\n> > I am working on this to use log_startup_progress_interval mechanism to\n> > log the progress in the server logs.\n> >\n> > Kindly review the patch and share your thoughts.\n>\n> Interesting idea, and overall a nice addition to the\n> pg_stat_progress_* reporting infrastructure.\n>\n> Could you add your patch to the current commitfest at\n> https://commitfest.postgresql.org/37/?\n>\n> See below for some comments on the patch:\n>\n> > xlog.c @ checkpoint_progress_start, checkpoint_progress_update_param, checkpoint_progress_end\n> > + /* In bootstrap mode, we don't actually record anything. */\n> > + if (IsBootstrapProcessingMode())\n> > + return;\n>\n> Why do you check against the state of the system?\n> pgstat_progress_update_* already provides protections against updating\n> the progress tables if the progress infrastructure is not loaded; and\n> otherwise (in the happy path) the cost of updating the progress fields\n> will be quite a bit higher than normal. 
Updating stat_progress isn't\n> very expensive (quite cheap, really), so I don't quite get why you\n> guard against reporting stats when you expect no other client to be\n> listening.\n>\n> I think you can simplify this a lot by directly using\n> pgstat_progress_update_param() instead.\n>\n> > xlog.c @ checkpoint_progress_start\n> > + pgstat_progress_start_command(PROGRESS_COMMAND_CHECKPOINT, InvalidOid);\n> > + checkpoint_progress_update_param(flags, PROGRESS_CHECKPOINT_PHASE,\n> > + PROGRESS_CHECKPOINT_PHASE_INIT);\n> > + if (flags & CHECKPOINT_CAUSE_XLOG)\n> > + checkpoint_progress_update_param(flags, PROGRESS_CHECKPOINT_KIND,\n> > + PROGRESS_CHECKPOINT_KIND_WAL);\n> > + else if (flags & CHECKPOINT_CAUSE_TIME)\n> > + checkpoint_progress_update_param(flags, PROGRESS_CHECKPOINT_KIND,\n> > + PROGRESS_CHECKPOINT_KIND_TIME);\n> > + [...]\n>\n> Could you assign the kind of checkpoint to a local variable, and then\n> update the \"phase\" and \"kind\" parameters at the same time through\n> pgstat_progress_update_multi_param(2, ...)? See\n> BuildRelationExtStatistics in extended_stats.c for an example usage.\n> Note that regardless of whether checkpoint_progress_update* will\n> remain, the checks done in that function already have been checked in\n> this function as well, so you can use the pgstat_* functions directly.\n>\n> > monitoring.sgml\n> > + <structname>pg_stat_progress_checkpoint</structname> view will contain a\n> > + single row indicating the progress of checkpoint operation.\n>\n> ... add \"if a checkpoint is currently active\".\n>\n> > + <structfield>total_buffer_writes</structfield> <type>bigint</type>\n> > + <structfield>total_file_syncs</structfield> <type>bigint</type>\n>\n> The other progress tables use [type]_total as column names for counter\n> targets (e.g. backup_total for backup_streamed, heap_blks_total for\n> heap_blks_scanned, etc.). 
I think that `buffers_total` and\n> `files_total` would be better column names.\n>\n> > + The checkpoint operation is requested due to XLOG filling.\n>\n> + The checkpoint was started because >max_wal_size< of WAL was written.\n>\n> > + The checkpoint operation is requested due to timeout.\n>\n> + The checkpoint was started due to the expiration of a\n> >checkpoint_timeout< interval\n>\n> > + The checkpoint operation is forced even if no XLOG activity has occurred\n> > + since the last one.\n>\n> + Some operation forced a checkpoint.\n>\n> > + <entry><literal>checkpointing CommitTs pages</literal></entry>\n>\n> CommitTs -> Commit time stamp\n>\n> Thanks for working on this.\n>\n> Kind regards,\n>\n> Matthias van de Meent\n\n\n", "msg_date": "Thu, 17 Feb 2022 12:26:07 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "Hi,\n\nOn Thu, Feb 17, 2022 at 12:26:07PM +0530, Nitin Jadhav wrote:\n> \n> Thank you for sharing the information. 'triggering backend PID' (int)\n> - can be stored without any problem.\n\nThere can be multiple processes triggering a checkpoint, or at least wanting it\nto happen or happen faster.\n\n> 'checkpoint or restartpoint?'\n\nDo you actually need to store that? 
Can't it be inferred from\npg_is_in_recovery()?\n\n\n", "msg_date": "Thu, 17 Feb 2022 19:05:15 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "On Thu, 17 Feb 2022 at 07:56, Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > Progress parameters are int64, so all of the new 'checkpoint start\n> > location' (lsn = uint64), 'triggering backend PID' (int), 'elapsed\n> > time' (store as start time in stat_progress, timestamp fits in 64\n> > bits) and 'checkpoint or restartpoint?' (boolean) would each fit in a\n> > current stat_progress parameter. Some processing would be required at\n> > the view, but that's not impossible to overcome.\n>\n> Thank you for sharing the information. 'triggering backend PID' (int)\n> - can be stored without any problem. 'checkpoint or restartpoint?'\n> (boolean) - can be stored as a integer value like\n> PROGRESS_CHECKPOINT_TYPE_CHECKPOINT(0) and\n> PROGRESS_CHECKPOINT_TYPE_RESTARTPOINT(1). 'elapsed time' (store as\n> start time in stat_progress, timestamp fits in 64 bits) - As\n> Timestamptz is of type int64 internally, so we can store the timestamp\n> value in the progres parameter and then expose a function like\n> 'pg_stat_get_progress_checkpoint_elapsed' which takes int64 (not\n> Timestamptz) as argument and then returns string representing the\n> elapsed time.\n\nNo need to use a string there; I think exposing the checkpoint start\ntime is good enough. The conversion of int64 to timestamp[tz] can be\ndone in SQL (although I'm not sure that exposing the internal bitwise\nrepresentation of Interval should be exposed to that extent) [0].\nUsers can then extract the duration interval using now() - start_time,\nwhich also allows the user to use their own preferred formatting.\n\n> This function can be called in the view. 
Is it\n> safe/advisable to use int64 type here rather than Timestamptz for this\n> purpose?\n\nYes, this must be exposed through int64, as the sql-callable\npg_stat_get_progress_info only exposes bigint columns. Any\ntransformation function may return other types (see\npg_indexam_progress_phasename for an example of that).\n\n> 'checkpoint start location' (lsn = uint64) - I feel we\n> cannot use progress parameters for this case. As assigning uint64 to\n> int64 type would be an issue for larger values and can lead to hidden\n> bugs.\n\nNot necessarily - we can (without much trouble) do a bitwise cast from\nuint64 to int64, and then (in SQL) cast it back to a pg_lsn [1]. Not\nvery elegant, but it works quite well.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] Assuming we don't care about the years past 294246 CE (2942467 is\nwhen int64 overflows into negatives), the following works without any\nprecision losses: SELECT\nto_timestamp((stat.my_int64::bigint/1000000)::float8) +\nmake_interval(0, 0, 0, 0, 0, 0, MOD(stat.my_int64, 1000000)::float8 /\n1000000::float8) FROM (SELECT 1::bigint) AS stat(my_int64);\n[1] SELECT '0/0'::pg_lsn + ((CASE WHEN stat.my_int64 < 0 THEN\npow(2::numeric, 64::numeric)::numeric ELSE 0::numeric END) +\nstat.my_int64::numeric) FROM (SELECT -2::bigint /* 0xFFFFFFFF/FFFFFFFE\n*/ AS my_bigint_lsn) AS stat(my_int64);\n\n\n", "msg_date": "Thu, 17 Feb 2022 12:11:09 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "> > Thank you for sharing the information. 'triggering backend PID' (int)\n> > - can be stored without any problem.\n>\n> There can be multiple processes triggering a checkpoint, or at least wanting it\n> to happen or happen faster.\n\nYes. There can be multiple processes but there will be one checkpoint\noperation at a time. 
So the backend PID corresponds to the current\ncheckpoint operation. Let me know if I am missing something.\n\n> > 'checkpoint or restartpoint?'\n>\n> Do you actually need to store that? Can't it be inferred from\n> pg_is_in_recovery()?\n\nAFAIK we cannot use pg_is_in_recovery() to predict whether it is a\ncheckpoint or restartpoint because if the system exits from recovery\nmode during restartpoint then any query to pg_stat_progress_checkpoint\nview will return it as a checkpoint which is ideally not correct. Please\ncorrect me if I am wrong.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Thu, Feb 17, 2022 at 4:35 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Thu, Feb 17, 2022 at 12:26:07PM +0530, Nitin Jadhav wrote:\n> >\n> > Thank you for sharing the information. 'triggering backend PID' (int)\n> > - can be stored without any problem.\n>\n> There can be multiple processes triggering a checkpoint, or at least wanting it\n> to happen or happen faster.\n>\n> > 'checkpoint or restartpoint?'\n>\n> Do you actually need to store that? Can't it be inferred from\n> pg_is_in_recovery()?\n\n\n", "msg_date": "Thu, 17 Feb 2022 22:39:02 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "Hi,\n\nOn Thu, Feb 17, 2022 at 10:39:02PM +0530, Nitin Jadhav wrote:\n> > > Thank you for sharing the information. 'triggering backend PID' (int)\n> > > - can be stored without any problem.\n> >\n> > There can be multiple processes triggering a checkpoint, or at least wanting it\n> > to happen or happen faster.\n> \n> Yes. There can be multiple processes but there will be one checkpoint\n> operation at a time. So the backend PID corresponds to the current\n> checkpoint operation. 
Let me know if I am missing something.\n\nIf there's a timed checkpoint triggered and then someone calls\npg_start_backup(), which then waits for the end of the current checkpoint\n(possibly after changing the flags), I think the view should reflect that in\nsome way. Maybe storing an array of (pid, flags) is too much, but at least a\ncounter with the number of processes actively waiting for the end of the\ncheckpoint would help.\n\n> > > 'checkpoint or restartpoint?'\n> >\n> > Do you actually need to store that? Can't it be inferred from\n> > pg_is_in_recovery()?\n> \n> AFAIK we cannot use pg_is_in_recovery() to predict whether it is a\n> checkpoint or restartpoint because if the system exits from recovery\n> mode during restartpoint then any query to pg_stat_progress_checkpoint\n> view will return it as a checkpoint which is ideally not correct. Please\n> correct me if I am wrong.\n\nRecovery ends with an end-of-recovery checkpoint that has to finish before the\npromotion can happen, so I don't think that a restartpoint can still be in\nprogress if pg_is_in_recovery() returns false.", "msg_date": "Fri, 18 Feb 2022 01:27:24 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "> Interesting idea, and overall a nice addition to the\n> pg_stat_progress_* reporting infrastructure.\n>\n> Could you add your patch to the current commitfest at\n> https://commitfest.postgresql.org/37/?\n>\n> See below for some comments on the patch:\n\nThank you for reviewing.\nI have added it to the commitfest - https://commitfest.postgresql.org/37/3545/\n\n> > xlog.c @ checkpoint_progress_start, checkpoint_progress_update_param, checkpoint_progress_end\n> > + /* In bootstrap mode, we don't actually record anything.
*/\n> > + if (IsBootstrapProcessingMode())\n> > + return;\n>\n> Why do you check against the state of the system?\n> pgstat_progress_update_* already provides protections against updating\n> the progress tables if the progress infrastructure is not loaded; and\n> otherwise (in the happy path) the cost of updating the progress fields\n> will be quite a bit higher than normal. Updating stat_progress isn't\n> very expensive (quite cheap, really), so I don't quite get why you\n> guard against reporting stats when you expect no other client to be\n> listening.\n\nNice point. I agree that the extra guards(IsBootstrapProcessingMode()\nand (flags & (CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_END_OF_RECOVERY)) ==\n0) are not needed as the progress reporting mechanism handles that\ninternally (It only updates when there is an access to the\npg_stat_progress_activity view). I am planning to add the progress of\ncheckpoint during shutdown and end-of-recovery cases in server logs as\nwe don't have access to the view. In this case these guards are\nnecessary. checkpoint_progress_update_param() is a generic function to\nreport progress to the view or server logs. 
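For illustration, the guard logic being described might look something like the following standalone C sketch. All names here are stand-ins for the backend symbols (IsBootstrapProcessingMode(), pgstat_progress_update_param(), the CHECKPOINT_* flags), not the actual xlog.c code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for the relevant backend symbols -- this is a simplified,
 * self-contained sketch of the guard logic, not the real backend code. */
#define CHECKPOINT_IS_SHUTDOWN      0x0001
#define CHECKPOINT_END_OF_RECOVERY  0x0002

static bool in_bootstrap_mode = false;  /* stands in for IsBootstrapProcessingMode() */
static int64_t progress_slots[4];       /* stands in for the shared progress array */

/* Stand-in for pgstat_progress_update_param(). */
static void stub_pgstat_progress_update_param(int index, int64_t val)
{
    progress_slots[index] = val;
}

/*
 * Sketch of the generic wrapper: update the pg_stat_progress_checkpoint
 * view only when a backend could actually query it; shutdown and
 * end-of-recovery checkpoints would be reported to the server log instead.
 */
static void checkpoint_progress_update_param(int flags, int index, int64_t val)
{
    /* In bootstrap mode, nothing records progress. */
    if (in_bootstrap_mode)
        return;

    /* The view is unreachable in these cases; a log-based report
     * (via log_startup_progress_interval) would go here instead. */
    if (flags & (CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_END_OF_RECOVERY))
        return;

    stub_pgstat_progress_update_param(index, val);
}
```

The point of the wrapper is only that the two reporting paths (view vs. server log) share one call site; the actual flag handling would live in the real patch.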
Thoughts?\n\n> I think you can simplify this a lot by directly using\n> pgstat_progress_update_param() instead.\n>\n> > xlog.c @ checkpoint_progress_start\n> > + pgstat_progress_start_command(PROGRESS_COMMAND_CHECKPOINT, InvalidOid);\n> > + checkpoint_progress_update_param(flags, PROGRESS_CHECKPOINT_PHASE,\n> > + PROGRESS_CHECKPOINT_PHASE_INIT);\n> > + if (flags & CHECKPOINT_CAUSE_XLOG)\n> > + checkpoint_progress_update_param(flags, PROGRESS_CHECKPOINT_KIND,\n> > + PROGRESS_CHECKPOINT_KIND_WAL);\n> > + else if (flags & CHECKPOINT_CAUSE_TIME)\n> > + checkpoint_progress_update_param(flags, PROGRESS_CHECKPOINT_KIND,\n> > + PROGRESS_CHECKPOINT_KIND_TIME);\n> > + [...]\n>\n> Could you assign the kind of checkpoint to a local variable, and then\n> update the \"phase\" and \"kind\" parameters at the same time through\n> pgstat_progress_update_multi_param(2, ...)? See\n> BuildRelationExtStatistics in extended_stats.c for an example usage.\n\nI will make use of pgstat_progress_update_multi_param() in the next\npatch to replace multiple calls to checkpoint_progress_update_param().\n\n> Note that regardless of whether checkpoint_progress_update* will\n> remain, the checks done in that function already have been checked in\n> this function as well, so you can use the pgstat_* functions directly.\n\nAs I mentioned before I am planning to add progress reporting in the\nserver logs, checkpoint_progress_update_param() is required and it\nmakes the job easier.\n\n> > monitoring.sgml\n> > + <structname>pg_stat_progress_checkpoint</structname> view will contain a\n> > + single row indicating the progress of checkpoint operation.\n>\n>... 
add \"if a checkpoint is currently active\".\n\nI feel adding extra words here to indicate \"if a checkpoint is\ncurrently active\" is not necessary as the view description provides\nthat information and also it aligns with the documentation of existing\nprogress views.\n\n> > + <structfield>total_buffer_writes</structfield> <type>bigint</type>\n> > + <structfield>total_file_syncs</structfield> <type>bigint</type>\n>\n> The other progress tables use [type]_total as column names for counter\n> targets (e.g. backup_total for backup_streamed, heap_blks_total for\n> heap_blks_scanned, etc.). I think that `buffers_total` and\n> `files_total` would be better column names.\n\nI agree and I will update this in the next patch.\n\n> > + The checkpoint operation is requested due to XLOG filling.\n>\n> + The checkpoint was started because >max_wal_size< of WAL was written.\n\nHow about this \"The checkpoint is started because max_wal_size is reached\".\n\n> > + The checkpoint operation is requested due to timeout.\n>\n> + The checkpoint was started due to the expiration of a\n> >checkpoint_timeout< interval\n\n\"The checkpoint is started because checkpoint_timeout expired\".\n\n> > + The checkpoint operation is forced even if no XLOG activity has occurred\n> > + since the last one.\n>\n> + Some operation forced a checkpoint.\n\n\"The checkpoint is started because some operation forced a checkpoint\".\n\n> > + <entry><literal>checkpointing CommitTs pages</literal></entry>\n>\n> CommitTs -> Commit time stamp\n\nI will handle this in the next patch.\n\nThanks & Regards,\nNitin Jadhav\n> On Thu, 10 Feb 2022 at 07:53, Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> >\n> > > > We need at least a trace of the number of buffers to sync (num_to_scan) before the checkpoint start, instead of just emitting the stats at the end.\n> > > >\n> > > > Bharat, it would be good to show the buffers synced counter and the total buffers to sync, checkpointer pid, substep it is running, 
whether it is on target for completion, checkpoint_Reason\n> > > > (manual/times/forced). BufferSync has several variables tracking the sync progress locally, and we may need some refactoring here.\n> > >\n> > > I agree to provide above mentioned information as part of showing the\n> > > progress of current checkpoint operation. I am currently looking into\n> > > the code to know if any other information can be added.\n> >\n> > Here is the initial patch to show the progress of checkpoint through\n> > pg_stat_progress_checkpoint view. Please find the attachment.\n> >\n> > The information added to this view are pid - process ID of a\n> > CHECKPOINTER process, kind - kind of checkpoint indicates the reason\n> > for checkpoint (values can be wal, time or force), phase - indicates\n> > the current phase of checkpoint operation, total_buffer_writes - total\n> > number of buffers to be written, buffers_processed - number of buffers\n> > processed, buffers_written - number of buffers written,\n> > total_file_syncs - total number of files to be synced, files_synced -\n> > number of files synced.\n> >\n> > There are many operations happen as part of checkpoint. For each of\n> > the operation I am updating the phase field of\n> > pg_stat_progress_checkpoint view. The values supported for this field\n> > are initializing, checkpointing replication slots, checkpointing\n> > snapshots, checkpointing logical rewrite mappings, checkpointing CLOG\n> > pages, checkpointing CommitTs pages, checkpointing SUBTRANS pages,\n> > checkpointing MULTIXACT pages, checkpointing SLRU pages, checkpointing\n> > buffers, performing sync requests, performing two phase checkpoint,\n> > recycling old XLOG files and Finalizing. In case of checkpointing\n> > buffers phase, the fields total_buffer_writes, buffers_processed and\n> > buffers_written shows the detailed progress of writing buffers. 
In\n> > case of performing sync requests phase, the fields total_file_syncs\n> > and files_synced shows the detailed progress of syncing files. In\n> > other phases, only the phase field is getting updated and it is\n> > difficult to show the progress because we do not get the total number\n> > of files count without traversing the directory. It is not worth to\n> > calculate that as it affects the performance of the checkpoint. I also\n> > gave a thought to just mention the number of files processed, but this\n> > wont give a meaningful progress information (It can be treated as\n> > statistics). Hence just updating the phase field in those scenarios.\n> >\n> > Apart from above fields, I am planning to add few more fields to the\n> > view in the next patch. That is, process ID of the backend process\n> > which triggered a CHECKPOINT command, checkpoint start location, filed\n> > to indicate whether it is a checkpoint or restartpoint and elapsed\n> > time of the checkpoint operation. Please share your thoughts. I would\n> > be happy to add any other information that contributes to showing the\n> > progress of checkpoint.\n> >\n> > As per the discussion in this thread, there should be some mechanism\n> > to show the progress of checkpoint during shutdown and end-of-recovery\n> > cases as we cannot access pg_stat_progress_checkpoint in those cases.\n> > I am working on this to use log_startup_progress_interval mechanism to\n> > log the progress in the server logs.\n> >\n> > Kindly review the patch and share your thoughts.\n>\n> Interesting idea, and overall a nice addition to the\n> pg_stat_progress_* reporting infrastructure.\n>\n> Could you add your patch to the current commitfest at\n> https://commitfest.postgresql.org/37/?\n>\n> See below for some comments on the patch:\n>\n> > xlog.c @ checkpoint_progress_start, checkpoint_progress_update_param, checkpoint_progress_end\n> > + /* In bootstrap mode, we don't actually record anything. 
*/\n> > + if (IsBootstrapProcessingMode())\n> > + return;\n>\n> Why do you check against the state of the system?\n> pgstat_progress_update_* already provides protections against updating\n> the progress tables if the progress infrastructure is not loaded; and\n> otherwise (in the happy path) the cost of updating the progress fields\n> will be quite a bit higher than normal. Updating stat_progress isn't\n> very expensive (quite cheap, really), so I don't quite get why you\n> guard against reporting stats when you expect no other client to be\n> listening.\n>\n> I think you can simplify this a lot by directly using\n> pgstat_progress_update_param() instead.\n>\n> > xlog.c @ checkpoint_progress_start\n> > + pgstat_progress_start_command(PROGRESS_COMMAND_CHECKPOINT, InvalidOid);\n> > + checkpoint_progress_update_param(flags, PROGRESS_CHECKPOINT_PHASE,\n> > + PROGRESS_CHECKPOINT_PHASE_INIT);\n> > + if (flags & CHECKPOINT_CAUSE_XLOG)\n> > + checkpoint_progress_update_param(flags, PROGRESS_CHECKPOINT_KIND,\n> > + PROGRESS_CHECKPOINT_KIND_WAL);\n> > + else if (flags & CHECKPOINT_CAUSE_TIME)\n> > + checkpoint_progress_update_param(flags, PROGRESS_CHECKPOINT_KIND,\n> > + PROGRESS_CHECKPOINT_KIND_TIME);\n> > + [...]\n>\n> Could you assign the kind of checkpoint to a local variable, and then\n> update the \"phase\" and \"kind\" parameters at the same time through\n> pgstat_progress_update_multi_param(2, ...)? See\n> BuildRelationExtStatistics in extended_stats.c for an example usage.\n> Note that regardless of whether checkpoint_progress_update* will\n> remain, the checks done in that function already have been checked in\n> this function as well, so you can use the pgstat_* functions directly.\n>\n> > monitoring.sgml\n> > + <structname>pg_stat_progress_checkpoint</structname> view will contain a\n> > + single row indicating the progress of checkpoint operation.\n>\n> ... 
add \"if a checkpoint is currently active\".\n>\n> > + <structfield>total_buffer_writes</structfield> <type>bigint</type>\n> > + <structfield>total_file_syncs</structfield> <type>bigint</type>\n>\n> The other progress tables use [type]_total as column names for counter\n> targets (e.g. backup_total for backup_streamed, heap_blks_total for\n> heap_blks_scanned, etc.). I think that `buffers_total` and\n> `files_total` would be better column names.\n>\n> > + The checkpoint operation is requested due to XLOG filling.\n>\n> + The checkpoint was started because >max_wal_size< of WAL was written.\n>\n> > + The checkpoint operation is requested due to timeout.\n>\n> + The checkpoint was started due to the expiration of a\n> >checkpoint_timeout< interval\n>\n> > + The checkpoint operation is forced even if no XLOG activity has occurred\n> > + since the last one.\n>\n> + Some operation forced a checkpoint.\n>\n> > + <entry><literal>checkpointing CommitTs pages</literal></entry>\n>\n> CommitTs -> Commit time stamp\n>\n> Thanks for working on this.\n>\n> Kind regards,\n>\n> Matthias van de Meent\n\n\n", "msg_date": "Fri, 18 Feb 2022 12:02:49 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "> > > > Thank you for sharing the information. 'triggering backend PID' (int)\n> > > > - can be stored without any problem.\n> > >\n> > > There can be multiple processes triggering a checkpoint, or at least wanting it\n> > > to happen or happen faster.\n> >\n> > Yes. There can be multiple processes but there will be one checkpoint\n> > operation at a time. So the backend PID corresponds to the current\n> > checkpoint operation. 
Let me know if I am missing something.\n>\n> If there's a checkpoint timed triggered and then someone calls\n> pg_start_backup() which then wait for the end of the current checkpoint\n> (possibly after changing the flags), I think the view should reflect that in\n> some way. Maybe storing an array of (pid, flags) is too much, but at least a\n> counter with the number of processes actively waiting for the end of the\n> checkpoint.\n\nOkay. I feel this can be added as additional field but it will not\nreplace backend_pid field as this represents the pid of the backend\nwhich triggered the current checkpoint. Probably a new field named\n'processes_wiating' or 'events_waiting' can be added for this purpose.\nThoughts?\n\n> > > > 'checkpoint or restartpoint?'\n> > >\n> > > Do you actually need to store that? Can't it be inferred from\n> > > pg_is_in_recovery()?\n> >\n> > AFAIK we cannot use pg_is_in_recovery() to predict whether it is a\n> > checkpoint or restartpoint because if the system exits from recovery\n> > mode during restartpoint then any query to pg_stat_progress_checkpoint\n> > view will return it as a checkpoint which is ideally not correct. Please\n> > correct me if I am wrong.\n>\n> Recovery ends with an end-of-recovery checkpoint that has to finish before the\n> promotion can happen, so I don't think that a restart can still be in progress\n> if pg_is_in_recovery() returns false.\n\nProbably writing of buffers or syncing files may complete before\npg_is_in_recovery() returns false. But there are some cleanup\noperations happen as part of the checkpoint. During this scenario, we\nmay get false value for pg_is_in_recovery(). 
Please refer following\npiece of code which is present in CreateRestartpoint().\n\nif (!RecoveryInProgress())\n replayTLI = XLogCtl->InsertTimeLineID;\n\nThanks & Regards,\nNitin Jadhav\n\nOn Thu, Feb 17, 2022 at 10:57 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Thu, Feb 17, 2022 at 10:39:02PM +0530, Nitin Jadhav wrote:\n> > > > Thank you for sharing the information. 'triggering backend PID' (int)\n> > > > - can be stored without any problem.\n> > >\n> > > There can be multiple processes triggering a checkpoint, or at least wanting it\n> > > to happen or happen faster.\n> >\n> > Yes. There can be multiple processes but there will be one checkpoint\n> > operation at a time. So the backend PID corresponds to the current\n> > checkpoint operation. Let me know if I am missing something.\n>\n> If there's a checkpoint timed triggered and then someone calls\n> pg_start_backup() which then wait for the end of the current checkpoint\n> (possibly after changing the flags), I think the view should reflect that in\n> some way. Maybe storing an array of (pid, flags) is too much, but at least a\n> counter with the number of processes actively waiting for the end of the\n> checkpoint.\n>\n> > > > 'checkpoint or restartpoint?'\n> > >\n> > > Do you actually need to store that? Can't it be inferred from\n> > > pg_is_in_recovery()?\n> >\n> > AFAIK we cannot use pg_is_in_recovery() to predict whether it is a\n> > checkpoint or restartpoint because if the system exits from recovery\n> > mode during restartpoint then any query to pg_stat_progress_checkpoint\n> > view will return it as a checkpoint which is ideally not correct. 
Please\n> > correct me if I am wrong.\n>\n> Recovery ends with an end-of-recovery checkpoint that has to finish before the\n> promotion can happen, so I don't think that a restart can still be in progress\n> if pg_is_in_recovery() returns false.\n\n\n", "msg_date": "Fri, 18 Feb 2022 12:20:26 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "Hi,\n\nOn Fri, Feb 18, 2022 at 12:20:26PM +0530, Nitin Jadhav wrote:\n> >\n> > If there's a checkpoint timed triggered and then someone calls\n> > pg_start_backup() which then wait for the end of the current checkpoint\n> > (possibly after changing the flags), I think the view should reflect that in\n> > some way. Maybe storing an array of (pid, flags) is too much, but at least a\n> > counter with the number of processes actively waiting for the end of the\n> > checkpoint.\n> \n> Okay. I feel this can be added as additional field but it will not\n> replace backend_pid field as this represents the pid of the backend\n> which triggered the current checkpoint.\n\nI don't think that's true. Requesting a checkpoint means telling the\ncheckpointer that it should wake up and start a checkpoint (or restore point)\nif it's not already doing so, so the pid will always be the checkpointer pid.\nThe only exception is a standalone backend, but in that case you won't be able\nto query that view anyway.\n\nAnd also while looking at the patch I see there's the same problem that I\nmentioned in the previous thread, which is that the effective flags can be\nupdated once the checkpoint started, and as-is the view won't reflect that. 
It\nalso means that you can't simply display one of wal, time or force but a\npossible combination of the flags (including the one not handled in v1).\n\n> Probably a new field named 'processes_wiating' or 'events_waiting' can be\n> added for this purpose.\n\nMaybe num_process_waiting?\n\n> > > > > 'checkpoint or restartpoint?'\n> > > >\n> > > > Do you actually need to store that? Can't it be inferred from\n> > > > pg_is_in_recovery()?\n> > >\n> > > AFAIK we cannot use pg_is_in_recovery() to predict whether it is a\n> > > checkpoint or restartpoint because if the system exits from recovery\n> > > mode during restartpoint then any query to pg_stat_progress_checkpoint\n> > > view will return it as a checkpoint which is ideally not correct. Please\n> > > correct me if I am wrong.\n> >\n> > Recovery ends with an end-of-recovery checkpoint that has to finish before the\n> > promotion can happen, so I don't think that a restart can still be in progress\n> > if pg_is_in_recovery() returns false.\n> \n> Probably writing of buffers or syncing files may complete before\n> pg_is_in_recovery() returns false. But there are some cleanup\n> operations happen as part of the checkpoint. During this scenario, we\n> may get false value for pg_is_in_recovery(). Please refer following\n> piece of code which is present in CreateRestartpoint().\n> \n> if (!RecoveryInProgress())\n> replayTLI = XLogCtl->InsertTimeLineID;\n\nThen maybe we could store the timeline rather then then kind of checkpoint?\nYou should still be able to compute the information while giving a bit more\ninformation for the same memory usage.\n\n\n", "msg_date": "Fri, 18 Feb 2022 15:43:51 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "> > Okay. 
I feel this can be added as additional field but it will not\n> > replace backend_pid field as this represents the pid of the backend\n> > which triggered the current checkpoint.\n>\n> I don't think that's true. Requesting a checkpoint means telling the\n> checkpointer that it should wake up and start a checkpoint (or restore point)\n> if it's not already doing so, so the pid will always be the checkpointer pid.\n> The only exception is a standalone backend, but in that case you won't be able\n> to query that view anyway.\n\nYes. I agree that the checkpoint will always be performed by the\ncheckpointer process. So the pid in the pg_stat_progress_checkpoint\nview will always correspond to the checkpointer pid only. Checkpoints\nget triggered in many scenarios. One of the cases is the CHECKPOINT\ncommand issued explicitly by the backend. In this scenario I would\nlike to know the backend pid which triggered the checkpoint. Hence I\nwould like to add a backend_pid field. So the\npg_stat_progress_checkpoint view contains pid fields as well as\nbackend_pid fields. The backend_pid contains a valid value only during\nthe CHECKPOINT command issued by the backend explicitly, otherwise the\nvalue will be 0. We may have to add an additional field to\n'CheckpointerShmemStruct' to hold the backend pid. The backend\nrequesting the checkpoint will update its pid to this structure.\nKindly let me know if you still feel the backend_pid field is not\nnecessary.\n\n\n> And also while looking at the patch I see there's the same problem that I\n> mentioned in the previous thread, which is that the effective flags can be\n> updated once the checkpoint started, and as-is the view won't reflect that. It\n> also means that you can't simply display one of wal, time or force but a\n> possible combination of the flags (including the one not handled in v1).\n\nIf I understand the above comment properly, it has 2 points. 
First is\nto display the combination of flags rather than just displaying wal,\ntime or force - The idea behind this is to just let the user know the\nreason for checkpointing. That is, the checkpoint is started because\nmax_wal_size is reached or checkpoint_timeout expired or explicitly\nissued CHECKPOINT command. The other flags like CHECKPOINT_IMMEDIATE,\nCHECKPOINT_WAIT or CHECKPOINT_FLUSH_ALL indicate how the checkpoint\nhas to be performed. Hence I have not included those in the view. If\nit is really required, I would like to modify the code to include\nother flags and display the combination. Second point is to reflect\nthe updated flags in the view. AFAIK, there is a possibility that the\nflags get updated during the on-going checkpoint but the reason for\ncheckpoint (wal, time or force) will remain same for the current\ncheckpoint. There might be a change in how checkpoint has to be\nperformed if CHECKPOINT_IMMEDIATE flag is set. If we go with\ndisplaying the combination of flags in the view, then probably we may\nhave to reflect this in the view.\n\n> > Probably a new field named 'processes_wiating' or 'events_waiting' can be\n> > added for this purpose.\n>\n> Maybe num_process_waiting?\n\nI feel 'processes_wiating' aligns more with the naming conventions of\nthe fields of the existing progres views.\n\n> > Probably writing of buffers or syncing files may complete before\n> > pg_is_in_recovery() returns false. But there are some cleanup\n> > operations happen as part of the checkpoint. During this scenario, we\n> > may get false value for pg_is_in_recovery(). 
Please refer following\n> > piece of code which is present in CreateRestartpoint().\n> >\n> > if (!RecoveryInProgress())\n> > replayTLI = XLogCtl->InsertTimeLineID;\n>\n> Then maybe we could store the timeline rather then then kind of checkpoint?\n> You should still be able to compute the information while giving a bit more\n> information for the same memory usage.\n\nCan you please describe more about how checkpoint/restartpoint can be\nconfirmed using the timeline id.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Fri, Feb 18, 2022 at 1:13 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Fri, Feb 18, 2022 at 12:20:26PM +0530, Nitin Jadhav wrote:\n> > >\n> > > If there's a checkpoint timed triggered and then someone calls\n> > > pg_start_backup() which then wait for the end of the current checkpoint\n> > > (possibly after changing the flags), I think the view should reflect that in\n> > > some way. Maybe storing an array of (pid, flags) is too much, but at least a\n> > > counter with the number of processes actively waiting for the end of the\n> > > checkpoint.\n> >\n> > Okay. I feel this can be added as additional field but it will not\n> > replace backend_pid field as this represents the pid of the backend\n> > which triggered the current checkpoint.\n>\n> I don't think that's true. Requesting a checkpoint means telling the\n> checkpointer that it should wake up and start a checkpoint (or restore point)\n> if it's not already doing so, so the pid will always be the checkpointer pid.\n> The only exception is a standalone backend, but in that case you won't be able\n> to query that view anyway.\n>\n> And also while looking at the patch I see there's the same problem that I\n> mentioned in the previous thread, which is that the effective flags can be\n> updated once the checkpoint started, and as-is the view won't reflect that. 
It\n> also means that you can't simply display one of wal, time or force but a\n> possible combination of the flags (including the one not handled in v1).\n>\n> > Probably a new field named 'processes_wiating' or 'events_waiting' can be\n> > added for this purpose.\n>\n> Maybe num_process_waiting?\n>\n> > > > > > 'checkpoint or restartpoint?'\n> > > > >\n> > > > > Do you actually need to store that? Can't it be inferred from\n> > > > > pg_is_in_recovery()?\n> > > >\n> > > > AFAIK we cannot use pg_is_in_recovery() to predict whether it is a\n> > > > checkpoint or restartpoint because if the system exits from recovery\n> > > > mode during restartpoint then any query to pg_stat_progress_checkpoint\n> > > > view will return it as a checkpoint which is ideally not correct. Please\n> > > > correct me if I am wrong.\n> > >\n> > > Recovery ends with an end-of-recovery checkpoint that has to finish before the\n> > > promotion can happen, so I don't think that a restart can still be in progress\n> > > if pg_is_in_recovery() returns false.\n> >\n> > Probably writing of buffers or syncing files may complete before\n> > pg_is_in_recovery() returns false. But there are some cleanup\n> > operations happen as part of the checkpoint. During this scenario, we\n> > may get false value for pg_is_in_recovery(). 
Please refer following\n> > piece of code which is present in CreateRestartpoint().\n> >\n> > if (!RecoveryInProgress())\n> > replayTLI = XLogCtl->InsertTimeLineID;\n>\n> Then maybe we could store the timeline rather then then kind of checkpoint?\n> You should still be able to compute the information while giving a bit more\n> information for the same memory usage.\n\n\n", "msg_date": "Fri, 18 Feb 2022 20:07:05 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "Hi,\n\nOn Fri, Feb 18, 2022 at 08:07:05PM +0530, Nitin Jadhav wrote:\n> \n> The backend_pid contains a valid value only during\n> the CHECKPOINT command issued by the backend explicitly, otherwise the\n> value will be 0. We may have to add an additional field to\n> 'CheckpointerShmemStruct' to hold the backend pid. The backend\n> requesting the checkpoint will update its pid to this structure.\n> Kindly let me know if you still feel the backend_pid field is not\n> necessary.\n\nThere are more scenarios where you can have a baackend requesting a checkpoint\nand waiting for its completion, and there may be more than one backend\nconcerned, so I don't think that storing only one / the first backend pid is\nok.\n\n> > And also while looking at the patch I see there's the same problem that I\n> > mentioned in the previous thread, which is that the effective flags can be\n> > updated once the checkpoint started, and as-is the view won't reflect that. It\n> > also means that you can't simply display one of wal, time or force but a\n> > possible combination of the flags (including the one not handled in v1).\n> \n> If I understand the above comment properly, it has 2 points. 
First is\n> to display the combination of flags rather than just displaying wal,\n> time or force - The idea behind this is to just let the user know the\n> reason for checkpointing. That is, the checkpoint is started because\n> max_wal_size is reached or checkpoint_timeout expired or explicitly\n> issued CHECKPOINT command. The other flags like CHECKPOINT_IMMEDIATE,\n> CHECKPOINT_WAIT or CHECKPOINT_FLUSH_ALL indicate how the checkpoint\n> has to be performed. Hence I have not included those in the view. If\n> it is really required, I would like to modify the code to include\n> other flags and display the combination.\n\nI think all the information should be exposed. Only knowing why the current\ncheckpoint has been triggered without any further information seems a bit\nuseless. Think for instance for cases like [1].\n\n> Second point is to reflect\n> the updated flags in the view. AFAIK, there is a possibility that the\n> flags get updated during the on-going checkpoint but the reason for\n> checkpoint (wal, time or force) will remain same for the current\n> checkpoint. There might be a change in how checkpoint has to be\n> performed if CHECKPOINT_IMMEDIATE flag is set. If we go with\n> displaying the combination of flags in the view, then probably we may\n> have to reflect this in the view.\n\nYou can only \"upgrade\" a checkpoint, but not \"downgrade\" it. So if for\ninstance you find both CHECKPOINT_CAUSE_TIME and CHECKPOINT_FORCE (which is\npossible) you can easily know which one was the one that triggered the\ncheckpoint and which one was added later.\n\n> > > Probably a new field named 'processes_wiating' or 'events_waiting' can be\n> > > added for this purpose.\n> >\n> > Maybe num_process_waiting?\n> \n> I feel 'processes_wiating' aligns more with the naming conventions of\n> the fields of the existing progres views.\n\nThere's at least pg_stat_progress_vacuum.num_dead_tuples. 
Anyway I don't have\na strong opinion on it, just make sure to correct the typo.\n\n> > > Probably writing of buffers or syncing files may complete before\n> > > pg_is_in_recovery() returns false. But there are some cleanup\n> > > operations happen as part of the checkpoint. During this scenario, we\n> > > may get false value for pg_is_in_recovery(). Please refer following\n> > > piece of code which is present in CreateRestartpoint().\n> > >\n> > > if (!RecoveryInProgress())\n> > > replayTLI = XLogCtl->InsertTimeLineID;\n> >\n> > Then maybe we could store the timeline rather then then kind of checkpoint?\n> > You should still be able to compute the information while giving a bit more\n> > information for the same memory usage.\n> \n> Can you please describe more about how checkpoint/restartpoint can be\n> confirmed using the timeline id.\n\nIf pg_is_in_recovery() is true, then it's a restartpoint, otherwise it's a\nrestartpoint if the checkpoint's timeline is different from the current\ntimeline?\n\n[1] https://www.postgresql.org/message-id/1486805889.24568.96.camel%40credativ.de\n\n\n", "msg_date": "Sat, 19 Feb 2022 13:32:15 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "> > Thank you for sharing the information. 'triggering backend PID' (int)\n> > - can be stored without any problem. 'checkpoint or restartpoint?'\n> > (boolean) - can be stored as a integer value like\n> > PROGRESS_CHECKPOINT_TYPE_CHECKPOINT(0) and\n> > PROGRESS_CHECKPOINT_TYPE_RESTARTPOINT(1). 
'elapsed time' (store as\n> > start time in stat_progress, timestamp fits in 64 bits) - As\n> > Timestamptz is of type int64 internally, so we can store the timestamp\n> > value in the progres parameter and then expose a function like\n> > 'pg_stat_get_progress_checkpoint_elapsed' which takes int64 (not\n> > Timestamptz) as argument and then returns string representing the\n> > elapsed time.\n>\n> No need to use a string there; I think exposing the checkpoint start\n> time is good enough. The conversion of int64 to timestamp[tz] can be\n> done in SQL (although I'm not sure that exposing the internal bitwise\n> representation of Interval should be exposed to that extent) [0].\n> Users can then extract the duration interval using now() - start_time,\n> which also allows the user to use their own preferred formatting.\n\nThe reason for showing the elapsed time rather than exposing the\ntimestamp directly is in case of checkpoint during shutdown and\nend-of-recovery, I am planning to log a message in server logs using\n'log_startup_progress_interval' infrastructure which displays elapsed\ntime. So just to match both of the behaviour I am displaying elapsed\ntime here. I feel that elapsed time gives a quicker feel of the\nprogress. Kindly let me know if you still feel just exposing the\ntimestamp is better than showing the elapsed time.\n\n> > 'checkpoint start location' (lsn = uint64) - I feel we\n> > cannot use progress parameters for this case. As assigning uint64 to\n> > int64 type would be an issue for larger values and can lead to hidden\n> > bugs.\n>\n> Not necessarily - we can (without much trouble) do a bitwise cast from\n> uint64 to int64, and then (in SQL) cast it back to a pg_lsn [1]. 
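The round trip is lossless because only the bit pattern is reinterpreted; a self-contained C sketch of the idea (helper names are hypothetical, not backend functions):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the bitwise trick: an LSN (uint64) is stored in a signed
 * int64 progress slot with its bit pattern unchanged, and the SQL in [1]
 * undoes the reinterpretation on read.  Helper names are hypothetical. */
static int64_t lsn_to_progress_param(uint64_t lsn)
{
    int64_t out;
    /* memcpy reinterprets the bits without implementation-defined
     * conversion; LSNs above 2^63 - 1 simply come out as negative bigints. */
    memcpy(&out, &lsn, sizeof(out));
    return out;
}

static uint64_t progress_param_to_lsn(int64_t param)
{
    uint64_t out;
    memcpy(&out, &param, sizeof(out));
    return out;
}
```

On the SQL side, the CASE expression shown in [1] adds 2^64 back for negative values before constructing the pg_lsn, which is the inverse of the reinterpretation above.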
Not\n> very elegant, but it works quite well.\n>\n> [1] SELECT '0/0'::pg_lsn + ((CASE WHEN stat.my_int64 < 0 THEN\n> pow(2::numeric, 64::numeric)::numeric ELSE 0::numeric END) +\n> stat.my_int64::numeric) FROM (SELECT -2::bigint /* 0xFFFFFFFF/FFFFFFFE\n> */ AS my_bigint_lsn) AS stat(my_int64);\n\nThanks for sharing. It works. I will include this in the next patch.\nOn Sat, Feb 19, 2022 at 11:02 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Fri, Feb 18, 2022 at 08:07:05PM +0530, Nitin Jadhav wrote:\n> >\n> > The backend_pid contains a valid value only during\n> > the CHECKPOINT command issued by the backend explicitly, otherwise the\n> > value will be 0. We may have to add an additional field to\n> > 'CheckpointerShmemStruct' to hold the backend pid. The backend\n> > requesting the checkpoint will update its pid to this structure.\n> > Kindly let me know if you still feel the backend_pid field is not\n> > necessary.\n>\n> There are more scenarios where you can have a baackend requesting a checkpoint\n> and waiting for its completion, and there may be more than one backend\n> concerned, so I don't think that storing only one / the first backend pid is\n> ok.\n>\n> > > And also while looking at the patch I see there's the same problem that I\n> > > mentioned in the previous thread, which is that the effective flags can be\n> > > updated once the checkpoint started, and as-is the view won't reflect that. It\n> > > also means that you can't simply display one of wal, time or force but a\n> > > possible combination of the flags (including the one not handled in v1).\n> >\n> > If I understand the above comment properly, it has 2 points. First is\n> > to display the combination of flags rather than just displaying wal,\n> > time or force - The idea behind this is to just let the user know the\n> > reason for checkpointing. 
That is, the checkpoint is started because\n> > max_wal_size is reached or checkpoint_timeout expired or explicitly\n> > issued CHECKPOINT command. The other flags like CHECKPOINT_IMMEDIATE,\n> > CHECKPOINT_WAIT or CHECKPOINT_FLUSH_ALL indicate how the checkpoint\n> > has to be performed. Hence I have not included those in the view. If\n> > it is really required, I would like to modify the code to include\n> > other flags and display the combination.\n>\n> I think all the information should be exposed. Only knowing why the current\n> checkpoint has been triggered without any further information seems a bit\n> useless. Think for instance for cases like [1].\n>\n> > Second point is to reflect\n> > the updated flags in the view. AFAIK, there is a possibility that the\n> > flags get updated during the on-going checkpoint but the reason for\n> > checkpoint (wal, time or force) will remain same for the current\n> > checkpoint. There might be a change in how checkpoint has to be\n> > performed if CHECKPOINT_IMMEDIATE flag is set. If we go with\n> > displaying the combination of flags in the view, then probably we may\n> > have to reflect this in the view.\n>\n> You can only \"upgrade\" a checkpoint, but not \"downgrade\" it. So if for\n> instance you find both CHECKPOINT_CAUSE_TIME and CHECKPOINT_FORCE (which is\n> possible) you can easily know which one was the one that triggered the\n> checkpoint and which one was added later.\n>\n> > > > Probably a new field named 'processes_wiating' or 'events_waiting' can be\n> > > > added for this purpose.\n> > >\n> > > Maybe num_process_waiting?\n> >\n> > I feel 'processes_wiating' aligns more with the naming conventions of\n> > the fields of the existing progres views.\n>\n> There's at least pg_stat_progress_vacuum.num_dead_tuples. 
Anyway I don't have\n> a strong opinion on it, just make sure to correct the typo.\n>\n> > > > Probably writing of buffers or syncing files may complete before\n> > > > pg_is_in_recovery() returns false. But there are some cleanup\n> > > > operations happen as part of the checkpoint. During this scenario, we\n> > > > may get false value for pg_is_in_recovery(). Please refer following\n> > > > piece of code which is present in CreateRestartpoint().\n> > > >\n> > > > if (!RecoveryInProgress())\n> > > > replayTLI = XLogCtl->InsertTimeLineID;\n> > >\n> > > Then maybe we could store the timeline rather then then kind of checkpoint?\n> > > You should still be able to compute the information while giving a bit more\n> > > information for the same memory usage.\n> >\n> > Can you please describe more about how checkpoint/restartpoint can be\n> > confirmed using the timeline id.\n>\n> If pg_is_in_recovery() is true, then it's a restartpoint, otherwise it's a\n> restartpoint if the checkpoint's timeline is different from the current\n> timeline?\n>\n> [1] https://www.postgresql.org/message-id/1486805889.24568.96.camel%40credativ.de\n\n\n", "msg_date": "Tue, 22 Feb 2022 12:08:33 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "+/* Kinds of checkpoint (as advertised via PROGRESS_CHECKPOINT_KIND) */\n+#define PROGRESS_CHECKPOINT_KIND_WAL 0\n+#define PROGRESS_CHECKPOINT_KIND_TIME 1\n+#define PROGRESS_CHECKPOINT_KIND_FORCE 2\n+#define PROGRESS_CHECKPOINT_KIND_UNKNOWN 3\n\nOn what basis have you classified the above into the various types of\ncheckpoints? 
AFAIK, the first two types are based on what triggered\nthe checkpoint (whether it was the checkpoint_timeout or max_wal_size\nsettings) while the third type indicates the force checkpoint that can\nhappen when the checkpoint is triggered for various reasons, e.g.\nduring createdb or dropdb etc. It is quite possible that both the\nPROGRESS_CHECKPOINT_KIND_TIME and PROGRESS_CHECKPOINT_KIND_FORCE flags\nare set for the checkpoint because multiple checkpoint requests are\nprocessed at one go, so what type of checkpoint would that be?\n\n+ */\n+ if ((flags & (CHECKPOINT_IS_SHUTDOWN |\nCHECKPOINT_END_OF_RECOVERY)) == 0)\n+ {\n+\npgstat_progress_start_command(PROGRESS_COMMAND_CHECKPOINT,\nInvalidOid);\n+ checkpoint_progress_update_param(flags,\nPROGRESS_CHECKPOINT_PHASE,\n+\n PROGRESS_CHECKPOINT_PHASE_INIT);\n+ if (flags & CHECKPOINT_CAUSE_XLOG)\n+ checkpoint_progress_update_param(flags,\nPROGRESS_CHECKPOINT_KIND,\n+\n PROGRESS_CHECKPOINT_KIND_WAL);\n+ else if (flags & CHECKPOINT_CAUSE_TIME)\n+ checkpoint_progress_update_param(flags,\nPROGRESS_CHECKPOINT_KIND,\n+\n PROGRESS_CHECKPOINT_KIND_TIME);\n+ else if (flags & CHECKPOINT_FORCE)\n+ checkpoint_progress_update_param(flags,\nPROGRESS_CHECKPOINT_KIND,\n+\n PROGRESS_CHECKPOINT_KIND_FORCE);\n+ else\n+ checkpoint_progress_update_param(flags,\nPROGRESS_CHECKPOINT_KIND,\n+\n PROGRESS_CHECKPOINT_KIND_UNKNOWN);\n+ }\n+}\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Thu, Feb 10, 2022 at 12:23 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > > We need at least a trace of the number of buffers to sync (num_to_scan) before the checkpoint start, instead of just emitting the stats at the end.\n> > >\n> > > Bharat, it would be good to show the buffers synced counter and the total buffers to sync, checkpointer pid, substep it is running, whether it is on target for completion, checkpoint_Reason\n> > > (manual/times/forced). 
BufferSync has several variables tracking the sync progress locally, and we may need some refactoring here.\n> >\n> > I agree to provide above mentioned information as part of showing the\n> > progress of current checkpoint operation. I am currently looking into\n> > the code to know if any other information can be added.\n>\n> Here is the initial patch to show the progress of checkpoint through\n> pg_stat_progress_checkpoint view. Please find the attachment.\n>\n> The information added to this view are pid - process ID of a\n> CHECKPOINTER process, kind - kind of checkpoint indicates the reason\n> for checkpoint (values can be wal, time or force), phase - indicates\n> the current phase of checkpoint operation, total_buffer_writes - total\n> number of buffers to be written, buffers_processed - number of buffers\n> processed, buffers_written - number of buffers written,\n> total_file_syncs - total number of files to be synced, files_synced -\n> number of files synced.\n>\n> There are many operations happen as part of checkpoint. For each of\n> the operation I am updating the phase field of\n> pg_stat_progress_checkpoint view. The values supported for this field\n> are initializing, checkpointing replication slots, checkpointing\n> snapshots, checkpointing logical rewrite mappings, checkpointing CLOG\n> pages, checkpointing CommitTs pages, checkpointing SUBTRANS pages,\n> checkpointing MULTIXACT pages, checkpointing SLRU pages, checkpointing\n> buffers, performing sync requests, performing two phase checkpoint,\n> recycling old XLOG files and Finalizing. In case of checkpointing\n> buffers phase, the fields total_buffer_writes, buffers_processed and\n> buffers_written shows the detailed progress of writing buffers. In\n> case of performing sync requests phase, the fields total_file_syncs\n> and files_synced shows the detailed progress of syncing files. 
In\n> other phases, only the phase field is getting updated and it is\n> difficult to show the progress because we do not get the total number\n> of files count without traversing the directory. It is not worth to\n> calculate that as it affects the performance of the checkpoint. I also\n> gave a thought to just mention the number of files processed, but this\n> wont give a meaningful progress information (It can be treated as\n> statistics). Hence just updating the phase field in those scenarios.\n>\n> Apart from above fields, I am planning to add few more fields to the\n> view in the next patch. That is, process ID of the backend process\n> which triggered a CHECKPOINT command, checkpoint start location, filed\n> to indicate whether it is a checkpoint or restartpoint and elapsed\n> time of the checkpoint operation. Please share your thoughts. I would\n> be happy to add any other information that contributes to showing the\n> progress of checkpoint.\n>\n> As per the discussion in this thread, there should be some mechanism\n> to show the progress of checkpoint during shutdown and end-of-recovery\n> cases as we cannot access pg_stat_progress_checkpoint in those cases.\n> I am working on this to use log_startup_progress_interval mechanism to\n> log the progress in the server logs.\n>\n> Kindly review the patch and share your thoughts.\n>\n>\n> On Fri, Jan 28, 2022 at 12:24 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Fri, Jan 21, 2022 at 11:07 AM Nitin Jadhav\n> > <nitinjadhavpostgres@gmail.com> wrote:\n> > >\n> > > > I think the right choice to solve the *general* problem is the\n> > > > mentioned pg_stat_progress_checkpoints.\n> > > >\n> > > > We may want to *additionally* have the ability to log the progress\n> > > > specifically for the special cases when we're not able to use that\n> > > > view. 
And in those case, we can perhaps just use the existing\n> > > > log_startup_progress_interval parameter for this as well -- at least\n> > > > for the startup checkpoint.\n> > >\n> > > +1\n> > >\n> > > > We need at least a trace of the number of buffers to sync (num_to_scan) before the checkpoint start, instead of just emitting the stats at the end.\n> > > >\n> > > > Bharat, it would be good to show the buffers synced counter and the total buffers to sync, checkpointer pid, substep it is running, whether it is on target for completion, checkpoint_Reason\n> > > > (manual/times/forced). BufferSync has several variables tracking the sync progress locally, and we may need some refactoring here.\n> > >\n> > > I agree to provide above mentioned information as part of showing the\n> > > progress of current checkpoint operation. I am currently looking into\n> > > the code to know if any other information can be added.\n> >\n> > As suggested in the other thread by Julien, I'm changing the subject\n> > of this thread to reflect the discussion.\n> >\n> > Regards,\n> > Bharath Rupireddy.\n\n\n", "msg_date": "Tue, 22 Feb 2022 20:10:02 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "On Tue, 22 Feb 2022 at 07:39, Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > > Thank you for sharing the information. 'triggering backend PID' (int)\n> > > - can be stored without any problem. 'checkpoint or restartpoint?'\n> > > (boolean) - can be stored as a integer value like\n> > > PROGRESS_CHECKPOINT_TYPE_CHECKPOINT(0) and\n> > > PROGRESS_CHECKPOINT_TYPE_RESTARTPOINT(1). 
'elapsed time' (store as\n> > > start time in stat_progress, timestamp fits in 64 bits) - As\n> > > Timestamptz is of type int64 internally, so we can store the timestamp\n> > > value in the progres parameter and then expose a function like\n> > > 'pg_stat_get_progress_checkpoint_elapsed' which takes int64 (not\n> > > Timestamptz) as argument and then returns string representing the\n> > > elapsed time.\n> >\n> > No need to use a string there; I think exposing the checkpoint start\n> > time is good enough. The conversion of int64 to timestamp[tz] can be\n> > done in SQL (although I'm not sure that exposing the internal bitwise\n> > representation of Interval should be exposed to that extent) [0].\n> > Users can then extract the duration interval using now() - start_time,\n> > which also allows the user to use their own preferred formatting.\n>\n> The reason for showing the elapsed time rather than exposing the\n> timestamp directly is in case of checkpoint during shutdown and\n> end-of-recovery, I am planning to log a message in server logs using\n> 'log_startup_progress_interval' infrastructure which displays elapsed\n> time. So just to match both of the behaviour I am displaying elapsed\n> time here. I feel that elapsed time gives a quicker feel of the\n> progress. Kindly let me know if you still feel just exposing the\n> timestamp is better than showing the elapsed time.\n\nAt least for pg_stat_progress_checkpoint, storing only a timestamp in\nthe pg_stat storage (instead of repeatedly updating the field as a\nduration) seems to provide much more precise measures of 'time\nelapsed' for other sessions if one step of the checkpoint is taking a\nlong time.\n\nI understand the want to integrate the log-based reporting in the same\nAPI, but I don't think that is necessarily the right approach:\npg_stat_progress_* has low-overhead infrastructure specifically to\nensure that most tasks will not run much slower while reporting, never\nwaiting for locks. 
Logging, however, needs to take locks (if only to\nprevent concurrent writes to the output file at a kernel level) and\nthus has a not insignificant overhead and thus is not very useful for\nprecise and very frequent statistics updates.\n\nSo, although similar in nature, I don't think it is smart to use the\nexact same infrastructure between pgstat_progress*-based reporting and\nlog-based progress reporting, especially if your logging-based\nprogress reporting is not intended to be a debugging-only\nconfiguration option similar to log_min_messages=DEBUG[1..5].\n\n- Matthias\n\n\n", "msg_date": "Tue, 22 Feb 2022 19:42:54 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "> I will make use of pgstat_progress_update_multi_param() in the next\n> patch to replace multiple calls to checkpoint_progress_update_param().\n\nFixed.\n---\n\n> > The other progress tables use [type]_total as column names for counter\n> > targets (e.g. backup_total for backup_streamed, heap_blks_total for\n> > heap_blks_scanned, etc.). I think that `buffers_total` and\n> > `files_total` would be better column names.\n>\n> I agree and I will update this in the next patch.\n\nFixed.\n---\n\n> How about this \"The checkpoint is started because max_wal_size is reached\".\n>\n> \"The checkpoint is started because checkpoint_timeout expired\".\n>\n> \"The checkpoint is started because some operation forced a checkpoint\".\n\nI have used the above description. 
Kindly let me know if any changes\nare required.\n---\n\n> > > + <entry><literal>checkpointing CommitTs pages</literal></entry>\n> >\n> > CommitTs -> Commit time stamp\n>\n> I will handle this in the next patch.\n\nFixed.\n---\n\n> There are more scenarios where you can have a backend requesting a checkpoint\n> and waiting for its completion, and there may be more than one backend\n> concerned, so I don't think that storing only one / the first backend pid is\n> ok.\n\nThanks for this information. I am not considering backend_pid.\n---\n\n> I think all the information should be exposed. Only knowing why the current\n> checkpoint has been triggered without any further information seems a bit\n> useless. Think for instance for cases like [1].\n\nI have supported all possible checkpoint kinds. I added\npg_stat_get_progress_checkpoint_kind() to convert the flags (int) to a\nstring representing a combination of flags, and I am also checking for the\nflag update in ImmediateCheckpointRequested(), which checks whether the\nCHECKPOINT_IMMEDIATE flag is set or not. I did not find any other\ncases where the flags get changed (which changes the current\ncheckpoint behaviour) during the checkpoint. Kindly let me know if I\nam missing something.\n---\n\n> > I feel 'processes_wiating' aligns more with the naming conventions of\n> > the fields of the existing progress views.\n>\n> There's at least pg_stat_progress_vacuum.num_dead_tuples. Anyway I don't have\n> a strong opinion on it, just make sure to correct the typo.\n\nMore analysis is required to support this. I am planning to take care\nof this in the next patch.\n---\n\n> If pg_is_in_recovery() is true, then it's a restartpoint, otherwise it's a\n> restartpoint if the checkpoint's timeline is different from the current\n> timeline?\n\nFixed.\n\nSharing the v2 patch. 
Kindly have a look and share your comments.\n\nThanks & Regards,\nNitin Jadhav\n\n\n\n\nOn Tue, Feb 22, 2022 at 12:08 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > > Thank you for sharing the information. 'triggering backend PID' (int)\n> > > - can be stored without any problem. 'checkpoint or restartpoint?'\n> > > (boolean) - can be stored as a integer value like\n> > > PROGRESS_CHECKPOINT_TYPE_CHECKPOINT(0) and\n> > > PROGRESS_CHECKPOINT_TYPE_RESTARTPOINT(1). 'elapsed time' (store as\n> > > start time in stat_progress, timestamp fits in 64 bits) - As\n> > > Timestamptz is of type int64 internally, so we can store the timestamp\n> > > value in the progres parameter and then expose a function like\n> > > 'pg_stat_get_progress_checkpoint_elapsed' which takes int64 (not\n> > > Timestamptz) as argument and then returns string representing the\n> > > elapsed time.\n> >\n> > No need to use a string there; I think exposing the checkpoint start\n> > time is good enough. The conversion of int64 to timestamp[tz] can be\n> > done in SQL (although I'm not sure that exposing the internal bitwise\n> > representation of Interval should be exposed to that extent) [0].\n> > Users can then extract the duration interval using now() - start_time,\n> > which also allows the user to use their own preferred formatting.\n>\n> The reason for showing the elapsed time rather than exposing the\n> timestamp directly is in case of checkpoint during shutdown and\n> end-of-recovery, I am planning to log a message in server logs using\n> 'log_startup_progress_interval' infrastructure which displays elapsed\n> time. So just to match both of the behaviour I am displaying elapsed\n> time here. I feel that elapsed time gives a quicker feel of the\n> progress. Kindly let me know if you still feel just exposing the\n> timestamp is better than showing the elapsed time.\n>\n> > > 'checkpoint start location' (lsn = uint64) - I feel we\n> > > cannot use progress parameters for this case. 
As assigning uint64 to\n> > > int64 type would be an issue for larger values and can lead to hidden\n> > > bugs.\n> >\n> > Not necessarily - we can (without much trouble) do a bitwise cast from\n> > uint64 to int64, and then (in SQL) cast it back to a pg_lsn [1]. Not\n> > very elegant, but it works quite well.\n> >\n> > [1] SELECT '0/0'::pg_lsn + ((CASE WHEN stat.my_int64 < 0 THEN\n> > pow(2::numeric, 64::numeric)::numeric ELSE 0::numeric END) +\n> > stat.my_int64::numeric) FROM (SELECT -2::bigint /* 0xFFFFFFFF/FFFFFFFE\n> > */ AS my_bigint_lsn) AS stat(my_int64);\n>\n> Thanks for sharing. It works. I will include this in the next patch.\n> On Sat, Feb 19, 2022 at 11:02 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > On Fri, Feb 18, 2022 at 08:07:05PM +0530, Nitin Jadhav wrote:\n> > >\n> > > The backend_pid contains a valid value only during\n> > > the CHECKPOINT command issued by the backend explicitly, otherwise the\n> > > value will be 0. We may have to add an additional field to\n> > > 'CheckpointerShmemStruct' to hold the backend pid. The backend\n> > > requesting the checkpoint will update its pid to this structure.\n> > > Kindly let me know if you still feel the backend_pid field is not\n> > > necessary.\n> >\n> > There are more scenarios where you can have a baackend requesting a checkpoint\n> > and waiting for its completion, and there may be more than one backend\n> > concerned, so I don't think that storing only one / the first backend pid is\n> > ok.\n> >\n> > > > And also while looking at the patch I see there's the same problem that I\n> > > > mentioned in the previous thread, which is that the effective flags can be\n> > > > updated once the checkpoint started, and as-is the view won't reflect that. 
It\n> > > > also means that you can't simply display one of wal, time or force but a\n> > > > possible combination of the flags (including the one not handled in v1).\n> > >\n> > > If I understand the above comment properly, it has 2 points. First is\n> > > to display the combination of flags rather than just displaying wal,\n> > > time or force - The idea behind this is to just let the user know the\n> > > reason for checkpointing. That is, the checkpoint is started because\n> > > max_wal_size is reached or checkpoint_timeout expired or explicitly\n> > > issued CHECKPOINT command. The other flags like CHECKPOINT_IMMEDIATE,\n> > > CHECKPOINT_WAIT or CHECKPOINT_FLUSH_ALL indicate how the checkpoint\n> > > has to be performed. Hence I have not included those in the view. If\n> > > it is really required, I would like to modify the code to include\n> > > other flags and display the combination.\n> >\n> > I think all the information should be exposed. Only knowing why the current\n> > checkpoint has been triggered without any further information seems a bit\n> > useless. Think for instance for cases like [1].\n> >\n> > > Second point is to reflect\n> > > the updated flags in the view. AFAIK, there is a possibility that the\n> > > flags get updated during the on-going checkpoint but the reason for\n> > > checkpoint (wal, time or force) will remain same for the current\n> > > checkpoint. There might be a change in how checkpoint has to be\n> > > performed if CHECKPOINT_IMMEDIATE flag is set. If we go with\n> > > displaying the combination of flags in the view, then probably we may\n> > > have to reflect this in the view.\n> >\n> > You can only \"upgrade\" a checkpoint, but not \"downgrade\" it. 
So if for\n> > instance you find both CHECKPOINT_CAUSE_TIME and CHECKPOINT_FORCE (which is\n> > possible) you can easily know which one was the one that triggered the\n> > checkpoint and which one was added later.\n> >\n> > > > > Probably a new field named 'processes_wiating' or 'events_waiting' can be\n> > > > > added for this purpose.\n> > > >\n> > > > Maybe num_process_waiting?\n> > >\n> > > I feel 'processes_wiating' aligns more with the naming conventions of\n> > > the fields of the existing progres views.\n> >\n> > There's at least pg_stat_progress_vacuum.num_dead_tuples. Anyway I don't have\n> > a strong opinion on it, just make sure to correct the typo.\n> >\n> > > > > Probably writing of buffers or syncing files may complete before\n> > > > > pg_is_in_recovery() returns false. But there are some cleanup\n> > > > > operations happen as part of the checkpoint. During this scenario, we\n> > > > > may get false value for pg_is_in_recovery(). Please refer following\n> > > > > piece of code which is present in CreateRestartpoint().\n> > > > >\n> > > > > if (!RecoveryInProgress())\n> > > > > replayTLI = XLogCtl->InsertTimeLineID;\n> > > >\n> > > > Then maybe we could store the timeline rather then then kind of checkpoint?\n> > > > You should still be able to compute the information while giving a bit more\n> > > > information for the same memory usage.\n> > >\n> > > Can you please describe more about how checkpoint/restartpoint can be\n> > > confirmed using the timeline id.\n> >\n> > If pg_is_in_recovery() is true, then it's a restartpoint, otherwise it's a\n> > restartpoint if the checkpoint's timeline is different from the current\n> > timeline?\n> >\n> > [1] https://www.postgresql.org/message-id/1486805889.24568.96.camel%40credativ.de", "msg_date": "Wed, 23 Feb 2022 18:58:14 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report 
checkpoint progress in server logs)" }, { "msg_contents": "> On what basis have you classified the above into the various types of\n> checkpoints? AFAIK, the first two types are based on what triggered\n> the checkpoint (whether it was the checkpoint_timeout or maz_wal_size\n> settings) while the third type indicates the force checkpoint that can\n> happen when the checkpoint is triggered for various reasons e.g. .\n> during createb or dropdb etc. This is quite possible that both the\n> PROGRESS_CHECKPOINT_KIND_TIME and PROGRESS_CHECKPOINT_KIND_FORCE flags\n> are set for the checkpoint because multiple checkpoint requests are\n> processed at one go, so what type of checkpoint would that be?\n\nMy initial understanding was wrong. In the v2 patch I have supported\nall values for checkpoint kinds and displaying a string in the\npg_stat_progress_checkpoint view which describes all the bits set in\nthe checkpoint flags.\n\nOn Tue, Feb 22, 2022 at 8:10 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> +/* Kinds of checkpoint (as advertised via PROGRESS_CHECKPOINT_KIND) */\n> +#define PROGRESS_CHECKPOINT_KIND_WAL 0\n> +#define PROGRESS_CHECKPOINT_KIND_TIME 1\n> +#define PROGRESS_CHECKPOINT_KIND_FORCE 2\n> +#define PROGRESS_CHECKPOINT_KIND_UNKNOWN 3\n>\n> On what basis have you classified the above into the various types of\n> checkpoints? AFAIK, the first two types are based on what triggered\n> the checkpoint (whether it was the checkpoint_timeout or maz_wal_size\n> settings) while the third type indicates the force checkpoint that can\n> happen when the checkpoint is triggered for various reasons e.g. .\n> during createb or dropdb etc. 
This is quite possible that both the\n> PROGRESS_CHECKPOINT_KIND_TIME and PROGRESS_CHECKPOINT_KIND_FORCE flags\n> are set for the checkpoint because multiple checkpoint requests are\n> processed at one go, so what type of checkpoint would that be?\n>\n> + */\n> + if ((flags & (CHECKPOINT_IS_SHUTDOWN |\n> CHECKPOINT_END_OF_RECOVERY)) == 0)\n> + {\n> +\n> pgstat_progress_start_command(PROGRESS_COMMAND_CHECKPOINT,\n> InvalidOid);\n> + checkpoint_progress_update_param(flags,\n> PROGRESS_CHECKPOINT_PHASE,\n> +\n> PROGRESS_CHECKPOINT_PHASE_INIT);\n> + if (flags & CHECKPOINT_CAUSE_XLOG)\n> + checkpoint_progress_update_param(flags,\n> PROGRESS_CHECKPOINT_KIND,\n> +\n> PROGRESS_CHECKPOINT_KIND_WAL);\n> + else if (flags & CHECKPOINT_CAUSE_TIME)\n> + checkpoint_progress_update_param(flags,\n> PROGRESS_CHECKPOINT_KIND,\n> +\n> PROGRESS_CHECKPOINT_KIND_TIME);\n> + else if (flags & CHECKPOINT_FORCE)\n> + checkpoint_progress_update_param(flags,\n> PROGRESS_CHECKPOINT_KIND,\n> +\n> PROGRESS_CHECKPOINT_KIND_FORCE);\n> + else\n> + checkpoint_progress_update_param(flags,\n> PROGRESS_CHECKPOINT_KIND,\n> +\n> PROGRESS_CHECKPOINT_KIND_UNKNOWN);\n> + }\n> +}\n>\n> --\n> With Regards,\n> Ashutosh Sharma.\n>\n> On Thu, Feb 10, 2022 at 12:23 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> >\n> > > > We need at least a trace of the number of buffers to sync (num_to_scan) before the checkpoint start, instead of just emitting the stats at the end.\n> > > >\n> > > > Bharat, it would be good to show the buffers synced counter and the total buffers to sync, checkpointer pid, substep it is running, whether it is on target for completion, checkpoint_Reason\n> > > > (manual/times/forced). BufferSync has several variables tracking the sync progress locally, and we may need some refactoring here.\n> > >\n> > > I agree to provide above mentioned information as part of showing the\n> > > progress of current checkpoint operation. 
I am currently looking into\n> > > the code to know if any other information can be added.\n> >\n> > Here is the initial patch to show the progress of checkpoint through\n> > pg_stat_progress_checkpoint view. Please find the attachment.\n> >\n> > The information added to this view are pid - process ID of a\n> > CHECKPOINTER process, kind - kind of checkpoint indicates the reason\n> > for checkpoint (values can be wal, time or force), phase - indicates\n> > the current phase of checkpoint operation, total_buffer_writes - total\n> > number of buffers to be written, buffers_processed - number of buffers\n> > processed, buffers_written - number of buffers written,\n> > total_file_syncs - total number of files to be synced, files_synced -\n> > number of files synced.\n> >\n> > There are many operations happen as part of checkpoint. For each of\n> > the operation I am updating the phase field of\n> > pg_stat_progress_checkpoint view. The values supported for this field\n> > are initializing, checkpointing replication slots, checkpointing\n> > snapshots, checkpointing logical rewrite mappings, checkpointing CLOG\n> > pages, checkpointing CommitTs pages, checkpointing SUBTRANS pages,\n> > checkpointing MULTIXACT pages, checkpointing SLRU pages, checkpointing\n> > buffers, performing sync requests, performing two phase checkpoint,\n> > recycling old XLOG files and Finalizing. In case of checkpointing\n> > buffers phase, the fields total_buffer_writes, buffers_processed and\n> > buffers_written shows the detailed progress of writing buffers. In\n> > case of performing sync requests phase, the fields total_file_syncs\n> > and files_synced shows the detailed progress of syncing files. In\n> > other phases, only the phase field is getting updated and it is\n> > difficult to show the progress because we do not get the total number\n> > of files count without traversing the directory. It is not worth to\n> > calculate that as it affects the performance of the checkpoint. 
I also\n> > gave a thought to just mention the number of files processed, but this\n> > wont give a meaningful progress information (It can be treated as\n> > statistics). Hence just updating the phase field in those scenarios.\n> >\n> > Apart from above fields, I am planning to add few more fields to the\n> > view in the next patch. That is, process ID of the backend process\n> > which triggered a CHECKPOINT command, checkpoint start location, filed\n> > to indicate whether it is a checkpoint or restartpoint and elapsed\n> > time of the checkpoint operation. Please share your thoughts. I would\n> > be happy to add any other information that contributes to showing the\n> > progress of checkpoint.\n> >\n> > As per the discussion in this thread, there should be some mechanism\n> > to show the progress of checkpoint during shutdown and end-of-recovery\n> > cases as we cannot access pg_stat_progress_checkpoint in those cases.\n> > I am working on this to use log_startup_progress_interval mechanism to\n> > log the progress in the server logs.\n> >\n> > Kindly review the patch and share your thoughts.\n> >\n> >\n> > On Fri, Jan 28, 2022 at 12:24 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > On Fri, Jan 21, 2022 at 11:07 AM Nitin Jadhav\n> > > <nitinjadhavpostgres@gmail.com> wrote:\n> > > >\n> > > > > I think the right choice to solve the *general* problem is the\n> > > > > mentioned pg_stat_progress_checkpoints.\n> > > > >\n> > > > > We may want to *additionally* have the ability to log the progress\n> > > > > specifically for the special cases when we're not able to use that\n> > > > > view. 
And in those case, we can perhaps just use the existing\n> > > > > log_startup_progress_interval parameter for this as well -- at least\n> > > > > for the startup checkpoint.\n> > > >\n> > > > +1\n> > > >\n> > > > > We need at least a trace of the number of buffers to sync (num_to_scan) before the checkpoint start, instead of just emitting the stats at the end.\n> > > > >\n> > > > > Bharat, it would be good to show the buffers synced counter and the total buffers to sync, checkpointer pid, substep it is running, whether it is on target for completion, checkpoint_Reason\n> > > > > (manual/times/forced). BufferSync has several variables tracking the sync progress locally, and we may need some refactoring here.\n> > > >\n> > > > I agree to provide above mentioned information as part of showing the\n> > > > progress of current checkpoint operation. I am currently looking into\n> > > > the code to know if any other information can be added.\n> > >\n> > > As suggested in the other thread by Julien, I'm changing the subject\n> > > of this thread to reflect the discussion.\n> > >\n> > > Regards,\n> > > Bharath Rupireddy.\n\n\n", "msg_date": "Wed, 23 Feb 2022 19:07:01 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "> At least for pg_stat_progress_checkpoint, storing only a timestamp in\n> the pg_stat storage (instead of repeatedly updating the field as a\n> duration) seems to provide much more precise measures of 'time\n> elapsed' for other sessions if one step of the checkpoint is taking a\n> long time.\n\nI am storing the checkpoint start timestamp in the st_progress_param[]\nand this gets set only once during the checkpoint (at the start of the\ncheckpoint). I have added function\npg_stat_get_progress_checkpoint_elapsed() which calculates the elapsed\ntime and returns a string. 
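The storage model behind this, write the start time once and derive everything else at read time, can be sketched as follows (the slot array, index and function names are hypothetical stand-ins for st_progress_param[], not the actual implementation):

```c
#include <stdint.h>

/*
 * Rough sketch only: the checkpointer publishes the checkpoint start
 * time (a 64-bit microsecond count, like TimestampTz) into one progress
 * slot exactly once; every reader derives the elapsed time at query
 * time, so no repeated updates of a duration field are needed.
 */
static int64_t progress_param[4];

static void
report_checkpoint_start(int64_t start_usec)
{
    progress_param[3] = start_usec;     /* written once per checkpoint */
}

static double
elapsed_seconds(int64_t now_usec)
{
    /* derive the duration on the reading side */
    return (double) (now_usec - progress_param[3]) / 1000000.0;
}
```

The reading side simply computes now() minus the stored start, which is also what a SQL-level now() - start_time would do against the view.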
This function gets called whenever\nthe pg_stat_progress_checkpoint view is queried. Kindly refer to the v2 patch and\nshare your thoughts.\n\n> I understand the want to integrate the log-based reporting in the same\n> API, but I don't think that is necessarily the right approach:\n> pg_stat_progress_* has low-overhead infrastructure specifically to\n> ensure that most tasks will not run much slower while reporting, never\n> waiting for locks. Logging, however, needs to take locks (if only to\n> prevent concurrent writes to the output file at a kernel level) and\n> thus has a not insignificant overhead and thus is not very useful for\n> precise and very frequent statistics updates.\n\nI understand that log-based reporting is very costly and that very\nfrequent updates are not advisable. I am planning to use the existing\ninfrastructure of 'log_startup_progress_interval', which provides an\noption for the user to configure the interval between each progress\nupdate. Hence it avoids frequent updates to the server logs. This approach\nis used only during shutdown and end-of-recovery cases because we\ncannot access the pg_stat_progress_checkpoint view during those scenarios.\n\n> So, although similar in nature, I don't think it is smart to use the\n> exact same infrastructure between pgstat_progress*-based reporting and\n> log-based progress reporting, especially if your logging-based\n> progress reporting is not intended to be a debugging-only\n> configuration option similar to log_min_messages=DEBUG[1..5].\n\nYes. I agree that we cannot use the same infrastructure for both.\nProgress views and server logs have different APIs for reporting the\nprogress information. But since both of these are required for the same\npurpose, I am planning to use a common function, which improves code\nreadability compared to calling them separately in all the scenarios. I am\nplanning to include log-based reporting in the next patch. 
Even after\nthat if using the same function is not recommended, I am happy to\nchange.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Wed, Feb 23, 2022 at 12:13 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Tue, 22 Feb 2022 at 07:39, Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> >\n> > > > Thank you for sharing the information. 'triggering backend PID' (int)\n> > > > - can be stored without any problem. 'checkpoint or restartpoint?'\n> > > > (boolean) - can be stored as a integer value like\n> > > > PROGRESS_CHECKPOINT_TYPE_CHECKPOINT(0) and\n> > > > PROGRESS_CHECKPOINT_TYPE_RESTARTPOINT(1). 'elapsed time' (store as\n> > > > start time in stat_progress, timestamp fits in 64 bits) - As\n> > > > Timestamptz is of type int64 internally, so we can store the timestamp\n> > > > value in the progres parameter and then expose a function like\n> > > > 'pg_stat_get_progress_checkpoint_elapsed' which takes int64 (not\n> > > > Timestamptz) as argument and then returns string representing the\n> > > > elapsed time.\n> > >\n> > > No need to use a string there; I think exposing the checkpoint start\n> > > time is good enough. The conversion of int64 to timestamp[tz] can be\n> > > done in SQL (although I'm not sure that exposing the internal bitwise\n> > > representation of Interval should be exposed to that extent) [0].\n> > > Users can then extract the duration interval using now() - start_time,\n> > > which also allows the user to use their own preferred formatting.\n> >\n> > The reason for showing the elapsed time rather than exposing the\n> > timestamp directly is in case of checkpoint during shutdown and\n> > end-of-recovery, I am planning to log a message in server logs using\n> > 'log_startup_progress_interval' infrastructure which displays elapsed\n> > time. So just to match both of the behaviour I am displaying elapsed\n> > time here. I feel that elapsed time gives a quicker feel of the\n> > progress. 
Kindly let me know if you still feel just exposing the\n> > timestamp is better than showing the elapsed time.\n>\n> At least for pg_stat_progress_checkpoint, storing only a timestamp in\n> the pg_stat storage (instead of repeatedly updating the field as a\n> duration) seems to provide much more precise measures of 'time\n> elapsed' for other sessions if one step of the checkpoint is taking a\n> long time.\n>\n> I understand the want to integrate the log-based reporting in the same\n> API, but I don't think that is necessarily the right approach:\n> pg_stat_progress_* has low-overhead infrastructure specifically to\n> ensure that most tasks will not run much slower while reporting, never\n> waiting for locks. Logging, however, needs to take locks (if only to\n> prevent concurrent writes to the output file at a kernel level) and\n> thus has a not insignificant overhead and thus is not very useful for\n> precise and very frequent statistics updates.\n>\n> So, although similar in nature, I don't think it is smart to use the\n> exact same infrastructure between pgstat_progress*-based reporting and\n> log-based progress reporting, especially if your logging-based\n> progress reporting is not intended to be a debugging-only\n> configuration option similar to log_min_messages=DEBUG[1..5].\n>\n> - Matthias\n\n\n", "msg_date": "Wed, 23 Feb 2022 19:53:28 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "+ if ((ckpt_flags &\n+ (CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_END_OF_RECOVERY)) == 0)\n+ {\n\nThis code (present at multiple places) looks a little ugly to me, what\nwe can do instead is add a macro probably named IsShutdownCheckpoint()\nwhich does the above check and use it in all the functions that have\nthis check. 
See below:\n\n#define IsShutdownCheckpoint(flags) \\\n (flags & (CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_END_OF_RECOVERY) != 0)\n\nAnd then you may use this macro like:\n\nif (IsBootstrapProcessingMode() || IsShutdownCheckpoint(flags))\n return;\n\nThis change can be done in all these functions:\n\n+void\n+checkpoint_progress_start(int flags)\n\n--\n\n+ */\n+void\n+checkpoint_progress_update_param(int index, int64 val)\n\n--\n\n+ * Stop reporting progress of the checkpoint.\n+ */\n+void\n+checkpoint_progress_end(void)\n\n==\n\n+\npgstat_progress_start_command(PROGRESS_COMMAND_CHECKPOINT,\nInvalidOid);\n+\n+ val[0] = XLogCtl->InsertTimeLineID;\n+ val[1] = flags;\n+ val[2] = PROGRESS_CHECKPOINT_PHASE_INIT;\n+ val[3] = CheckpointStats.ckpt_start_t;\n+\n+ pgstat_progress_update_multi_param(4, index, val);\n+ }\n\nAny specific reason for recording the timelineID in checkpoint stats\ntable? Will this ever change in our case?\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Wed, Feb 23, 2022 at 6:59 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > I will make use of pgstat_progress_update_multi_param() in the next\n> > patch to replace multiple calls to checkpoint_progress_update_param().\n>\n> Fixed.\n> ---\n>\n> > > The other progress tables use [type]_total as column names for counter\n> > > targets (e.g. backup_total for backup_streamed, heap_blks_total for\n> > > heap_blks_scanned, etc.). I think that `buffers_total` and\n> > > `files_total` would be better column names.\n> >\n> > I agree and I will update this in the next patch.\n>\n> Fixed.\n> ---\n>\n> > How about this \"The checkpoint is started because max_wal_size is reached\".\n> >\n> > \"The checkpoint is started because checkpoint_timeout expired\".\n> >\n> > \"The checkpoint is started because some operation forced a checkpoint\".\n>\n> I have used the above description. 
Kindly let me know if any changes\n> are required.\n> ---\n>\n> > > > + <entry><literal>checkpointing CommitTs pages</literal></entry>\n> > >\n> > > CommitTs -> Commit time stamp\n> >\n> > I will handle this in the next patch.\n>\n> Fixed.\n> ---\n>\n> > There are more scenarios where you can have a baackend requesting a checkpoint\n> > and waiting for its completion, and there may be more than one backend\n> > concerned, so I don't think that storing only one / the first backend pid is\n> > ok.\n>\n> Thanks for this information. I am not considering backend_pid.\n> ---\n>\n> > I think all the information should be exposed. Only knowing why the current\n> > checkpoint has been triggered without any further information seems a bit\n> > useless. Think for instance for cases like [1].\n>\n> I have supported all possible checkpoint kinds. Added\n> pg_stat_get_progress_checkpoint_kind() to convert the flags (int) to a\n> string representing a combination of flags and also checking for the\n> flag update in ImmediateCheckpointRequested() which checks whether\n> CHECKPOINT_IMMEDIATE flag is set or not. I did not find any other\n> cases where the flags get changed (which changes the current\n> checkpoint behaviour) during the checkpoint. Kindly let me know if I\n> am missing something.\n> ---\n>\n> > > I feel 'processes_wiating' aligns more with the naming conventions of\n> > > the fields of the existing progres views.\n> >\n> > There's at least pg_stat_progress_vacuum.num_dead_tuples. Anyway I don't have\n> > a strong opinion on it, just make sure to correct the typo.\n>\n> More analysis is required to support this. I am planning to take care\n> in the next patch.\n> ---\n>\n> > If pg_is_in_recovery() is true, then it's a restartpoint, otherwise it's a\n> > restartpoint if the checkpoint's timeline is different from the current\n> > timeline?\n>\n> Fixed.\n>\n> Sharing the v2 patch. 
Kindly have a look and share your comments.\n>\n> Thanks & Regards,\n> Nitin Jadhav\n>\n>\n>\n>\n> On Tue, Feb 22, 2022 at 12:08 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> >\n> > > > Thank you for sharing the information. 'triggering backend PID' (int)\n> > > > - can be stored without any problem. 'checkpoint or restartpoint?'\n> > > > (boolean) - can be stored as a integer value like\n> > > > PROGRESS_CHECKPOINT_TYPE_CHECKPOINT(0) and\n> > > > PROGRESS_CHECKPOINT_TYPE_RESTARTPOINT(1). 'elapsed time' (store as\n> > > > start time in stat_progress, timestamp fits in 64 bits) - As\n> > > > Timestamptz is of type int64 internally, so we can store the timestamp\n> > > > value in the progres parameter and then expose a function like\n> > > > 'pg_stat_get_progress_checkpoint_elapsed' which takes int64 (not\n> > > > Timestamptz) as argument and then returns string representing the\n> > > > elapsed time.\n> > >\n> > > No need to use a string there; I think exposing the checkpoint start\n> > > time is good enough. The conversion of int64 to timestamp[tz] can be\n> > > done in SQL (although I'm not sure that exposing the internal bitwise\n> > > representation of Interval should be exposed to that extent) [0].\n> > > Users can then extract the duration interval using now() - start_time,\n> > > which also allows the user to use their own preferred formatting.\n> >\n> > The reason for showing the elapsed time rather than exposing the\n> > timestamp directly is in case of checkpoint during shutdown and\n> > end-of-recovery, I am planning to log a message in server logs using\n> > 'log_startup_progress_interval' infrastructure which displays elapsed\n> > time. So just to match both of the behaviour I am displaying elapsed\n> > time here. I feel that elapsed time gives a quicker feel of the\n> > progress. 
Kindly let me know if you still feel just exposing the\n> > timestamp is better than showing the elapsed time.\n> >\n> > > > 'checkpoint start location' (lsn = uint64) - I feel we\n> > > > cannot use progress parameters for this case. As assigning uint64 to\n> > > > int64 type would be an issue for larger values and can lead to hidden\n> > > > bugs.\n> > >\n> > > Not necessarily - we can (without much trouble) do a bitwise cast from\n> > > uint64 to int64, and then (in SQL) cast it back to a pg_lsn [1]. Not\n> > > very elegant, but it works quite well.\n> > >\n> > > [1] SELECT '0/0'::pg_lsn + ((CASE WHEN stat.my_int64 < 0 THEN\n> > > pow(2::numeric, 64::numeric)::numeric ELSE 0::numeric END) +\n> > > stat.my_int64::numeric) FROM (SELECT -2::bigint /* 0xFFFFFFFF/FFFFFFFE\n> > > */ AS my_bigint_lsn) AS stat(my_int64);\n> >\n> > Thanks for sharing. It works. I will include this in the next patch.\n> > On Sat, Feb 19, 2022 at 11:02 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > On Fri, Feb 18, 2022 at 08:07:05PM +0530, Nitin Jadhav wrote:\n> > > >\n> > > > The backend_pid contains a valid value only during\n> > > > the CHECKPOINT command issued by the backend explicitly, otherwise the\n> > > > value will be 0. We may have to add an additional field to\n> > > > 'CheckpointerShmemStruct' to hold the backend pid. 
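As a side note on the LSN-through-bigint trick quoted earlier in this message: the reinterpretation is lossless because only the bit pattern changes, and the SQL expression restores values with the top bit set by adding 2^64. A minimal sketch of that round trip (illustration only, not PostgreSQL code):

```c
#include <stdint.h>
#include <string.h>

/*
 * Illustration only: an LSN is a uint64, while a progress slot is an
 * int64. Reinterpreting the bits in both directions loses nothing on
 * two's-complement machines, which is why the SQL side can recover the
 * original value by adding 2^64 whenever the stored bigint is negative.
 */
static int64_t
lsn_to_progress_param(uint64_t lsn)
{
    int64_t param;

    memcpy(&param, &lsn, sizeof(param));    /* bit-for-bit copy */
    return param;
}

static uint64_t
progress_param_to_lsn(int64_t param)
{
    uint64_t lsn;

    memcpy(&lsn, &param, sizeof(lsn));      /* reverse the reinterpretation */
    return lsn;
}
```

The -2 case below mirrors the FFFFFFFF/FFFFFFFE example used in the quoted SELECT.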
The backend\n> > > > requesting the checkpoint will update its pid to this structure.\n> > > > Kindly let me know if you still feel the backend_pid field is not\n> > > > necessary.\n> > >\n> > > There are more scenarios where you can have a baackend requesting a checkpoint\n> > > and waiting for its completion, and there may be more than one backend\n> > > concerned, so I don't think that storing only one / the first backend pid is\n> > > ok.\n> > >\n> > > > > And also while looking at the patch I see there's the same problem that I\n> > > > > mentioned in the previous thread, which is that the effective flags can be\n> > > > > updated once the checkpoint started, and as-is the view won't reflect that. It\n> > > > > also means that you can't simply display one of wal, time or force but a\n> > > > > possible combination of the flags (including the one not handled in v1).\n> > > >\n> > > > If I understand the above comment properly, it has 2 points. First is\n> > > > to display the combination of flags rather than just displaying wal,\n> > > > time or force - The idea behind this is to just let the user know the\n> > > > reason for checkpointing. That is, the checkpoint is started because\n> > > > max_wal_size is reached or checkpoint_timeout expired or explicitly\n> > > > issued CHECKPOINT command. The other flags like CHECKPOINT_IMMEDIATE,\n> > > > CHECKPOINT_WAIT or CHECKPOINT_FLUSH_ALL indicate how the checkpoint\n> > > > has to be performed. Hence I have not included those in the view. If\n> > > > it is really required, I would like to modify the code to include\n> > > > other flags and display the combination.\n> > >\n> > > I think all the information should be exposed. Only knowing why the current\n> > > checkpoint has been triggered without any further information seems a bit\n> > > useless. Think for instance for cases like [1].\n> > >\n> > > > Second point is to reflect\n> > > > the updated flags in the view. 
AFAIK, there is a possibility that the\n> > > > flags get updated during the on-going checkpoint but the reason for\n> > > > checkpoint (wal, time or force) will remain same for the current\n> > > > checkpoint. There might be a change in how checkpoint has to be\n> > > > performed if CHECKPOINT_IMMEDIATE flag is set. If we go with\n> > > > displaying the combination of flags in the view, then probably we may\n> > > > have to reflect this in the view.\n> > >\n> > > You can only \"upgrade\" a checkpoint, but not \"downgrade\" it. So if for\n> > > instance you find both CHECKPOINT_CAUSE_TIME and CHECKPOINT_FORCE (which is\n> > > possible) you can easily know which one was the one that triggered the\n> > > checkpoint and which one was added later.\n> > >\n> > > > > > Probably a new field named 'processes_wiating' or 'events_waiting' can be\n> > > > > > added for this purpose.\n> > > > >\n> > > > > Maybe num_process_waiting?\n> > > >\n> > > > I feel 'processes_wiating' aligns more with the naming conventions of\n> > > > the fields of the existing progres views.\n> > >\n> > > There's at least pg_stat_progress_vacuum.num_dead_tuples. Anyway I don't have\n> > > a strong opinion on it, just make sure to correct the typo.\n> > >\n> > > > > > Probably writing of buffers or syncing files may complete before\n> > > > > > pg_is_in_recovery() returns false. But there are some cleanup\n> > > > > > operations happen as part of the checkpoint. During this scenario, we\n> > > > > > may get false value for pg_is_in_recovery(). 
Please refer following\n> > > > > > piece of code which is present in CreateRestartpoint().\n> > > > > >\n> > > > > > if (!RecoveryInProgress())\n> > > > > > replayTLI = XLogCtl->InsertTimeLineID;\n> > > > >\n> > > > > Then maybe we could store the timeline rather then then kind of checkpoint?\n> > > > > You should still be able to compute the information while giving a bit more\n> > > > > information for the same memory usage.\n> > > >\n> > > > Can you please describe more about how checkpoint/restartpoint can be\n> > > > confirmed using the timeline id.\n> > >\n> > > If pg_is_in_recovery() is true, then it's a restartpoint, otherwise it's a\n> > > restartpoint if the checkpoint's timeline is different from the current\n> > > timeline?\n> > >\n> > > [1] https://www.postgresql.org/message-id/1486805889.24568.96.camel%40credativ.de\n\n\n", "msg_date": "Wed, 23 Feb 2022 21:46:28 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "I think the change to ImmediateCheckpointRequested() makes no sense.\nBefore this patch, that function merely inquires whether there's an\nimmediate checkpoint queued. After this patch, it ... changes a\nprogress-reporting flag? I think it would make more sense to make the\nprogress-report flag change in whatever is the place that *requests* an\nimmediate checkpoint rather than here.\n\nI think the use of capitals in CHECKPOINT and CHECKPOINTER in the\ndocumentation is excessive. (Same for terms such as MULTIXACT and\nothers in those docs; we typically use those in lowercase when\nuser-facing; and do we really use term CLOG anymore? Don't we call it\n\"commit log\" nowadays?)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Hay quien adquiere la mala costumbre de ser infeliz\" (M. A. 
Evans)\n\n\n", "msg_date": "Wed, 23 Feb 2022 15:39:04 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "+ Whenever the checkpoint operation is running, the\n+ <structname>pg_stat_progress_checkpoint</structname> view will contain a\n+ single row indicating the progress of the checkpoint. The tables below\n\nMaybe it should show a single row , unless the checkpointer isn't running at\nall (like in single user mode).\n\n+ Process ID of a CHECKPOINTER process.\n\nIt's *the* checkpointer process.\n\npgstatfuncs.c has a whitespace issue (tab-space).\n\nI suppose the functions should set provolatile.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 23 Feb 2022 13:22:10 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "On Wed, 23 Feb 2022 at 15:24, Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > At least for pg_stat_progress_checkpoint, storing only a timestamp in\n> > the pg_stat storage (instead of repeatedly updating the field as a\n> > duration) seems to provide much more precise measures of 'time\n> > elapsed' for other sessions if one step of the checkpoint is taking a\n> > long time.\n>\n> I am storing the checkpoint start timestamp in the st_progress_param[]\n> and this gets set only once during the checkpoint (at the start of the\n> checkpoint). I have added function\n> pg_stat_get_progress_checkpoint_elapsed() which calculates the elapsed\n> time and returns a string. This function gets called whenever\n> pg_stat_progress_checkpoint view is queried. 
Kindly refer v2 patch and\n> share your thoughts.\n\nI dislike the lack of access to the actual value of the checkpoint\nstart / checkpoint elapsed field.\n\nAs a user, if I query the pg_stat_progress_* views, my terminal or\napplication can easily interpret an `interval` value and cast it to\nstring, but the opposite is not true: the current implementation for\npg_stat_get_progress_checkpoint_elapsed loses precision. This is why\nwe use typed numeric fields in effectively all other places instead of\nstringified versions of the values: oid fields, counters, etc are all\nrendered as bigint in the view, so that no information is lost and\ninterpretation is trivial.\n\n> > I understand the want to integrate the log-based reporting in the same\n> > API, but I don't think that is necessarily the right approach:\n> > pg_stat_progress_* has low-overhead infrastructure specifically to\n> > ensure that most tasks will not run much slower while reporting, never\n> > waiting for locks. Logging, however, needs to take locks (if only to\n> > prevent concurrent writes to the output file at a kernel level) and\n> > thus has a not insignificant overhead and thus is not very useful for\n> > precise and very frequent statistics updates.\n>\n> I understand that the log based reporting is very costly and very\n> frequent updates are not advisable. I am planning to use the existing\n> infrastructure of 'log_startup_progress_interval' which provides an\n> option for the user to configure the interval between each progress\n> update. Hence it avoids frequent updates to server logs. This approach\n> is used only during shutdown and end-of-recovery cases because we\n> cannot access pg_stat_progress_checkpoint view during those scenarios.\n\nI see; but log_startup_progress_interval seems to be exclusively\nconsumed through the ereport_startup_progress macro. 
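For reference, the throttling that log_startup_progress_interval provides boils down to emitting a line only when the configured interval has elapsed since the previous one. A hypothetical sketch of that check (names invented here, not the actual ereport_startup_progress code):

```c
#include <stdint.h>

/*
 * Sketch: a progress line is logged only when at least interval_usec
 * has elapsed since the previous line, so frequent updates never flood
 * the server log. A negative interval means the feature is disabled.
 */
static int64_t last_report_usec = 0;

static int
should_log_progress(int64_t now_usec, int64_t interval_usec)
{
    if (interval_usec < 0)
        return 0;                       /* feature disabled */
    if (now_usec - last_report_usec >= interval_usec)
    {
        last_report_usec = now_usec;    /* restart the interval timer */
        return 1;
    }
    return 0;
}
```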
Why put\nstartup/shutdown logging on the same path as the happy flow of normal\ncheckpoints?\n\n> > So, although similar in nature, I don't think it is smart to use the\n> > exact same infrastructure between pgstat_progress*-based reporting and\n> > log-based progress reporting, especially if your logging-based\n> > progress reporting is not intended to be a debugging-only\n> > configuration option similar to log_min_messages=DEBUG[1..5].\n>\n> Yes. I agree that we cannot use the same infrastructure for both.\n> Progress views and servers logs have different APIs to report the\n> progress information. But since both of this are required for the same\n> purpose, I am planning to use a common function which increases the\n> code readability than calling it separately in all the scenarios. I am\n> planning to include log based reporting in the next patch. Even after\n> that if using the same function is not recommended, I am happy to\n> change.\n\nI don't think that checkpoint_progress_update_param(int, uint64) fits\nwell with the construction of progress log messages, requiring\nspecial-casing / matching the offset numbers to actual fields inside\nthat single function, which adds unnecessary overhead when compared\nagainst normal and direct calls to the related infrastructure.\n\nI think that, instead of looking to what might at some point be added,\nit is better to use the currently available functions instead, and\nmove to new functions if and when the log-based reporting requires it.\n\n\n- Matthias\n\n\n", "msg_date": "Wed, 23 Feb 2022 20:50:59 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "On Wed, 23 Feb 2022 at 14:28, Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> Sharing the v2 patch. 
Kindly have a look and share your comments.\n\nThanks for updating.\n\n> diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\n\nWith the new pg_stat_progress_checkpoint, you should also add a\nbackreference to this progress reporting in the CHECKPOINT sql command\ndocumentation located in checkpoint.sgml, and maybe in wal.sgml and/or\nbackup.sgml too. See e.g. cluster.sgml around line 195 for an example.\n\n> diff --git a/src/backend/postmaster/checkpointer.c b/src/backend/postmaster/checkpointer.c\n> +ImmediateCheckpointRequested(int flags)\n> if (cps->ckpt_flags & CHECKPOINT_IMMEDIATE)\n> + {\n> + updated_flags |= CHECKPOINT_IMMEDIATE;\n\nI don't think that these changes are expected behaviour. Under in this\ncondition; the currently running checkpoint is still not 'immediate',\nbut it is going to hurry up for a new, actually immediate checkpoint.\nThose are different kinds of checkpoint handling; and I don't think\nyou should modify the reported flags to show that we're going to do\nstuff faster than usual. 
Maybe maintiain a seperate 'upcoming\ncheckpoint flags' field instead?\n\n> diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql\n> + ( SELECT '0/0'::pg_lsn +\n> + ((CASE\n> + WHEN stat.lsn_int64 < 0 THEN pow(2::numeric, 64::numeric)::numeric\n> + ELSE 0::numeric\n> + END) +\n> + stat.lsn_int64::numeric)\n> + FROM (SELECT s.param3::bigint) AS stat(lsn_int64)\n> + ) AS start_lsn,\n\nMy LSN select statement was an example that could be run directly in\npsql; the so you didn't have to embed the SELECT into the view query.\nThe following should be sufficient (and save the planner a few cycles\notherwise spent in inlining):\n\n+ ('0/0'::pg_lsn +\n+ ((CASE\n+ WHEN s.param3 < 0 THEN pow(2::numeric,\n64::numeric)::numeric\n+ ELSE 0::numeric\n+ END) +\n+ s.param3::numeric)\n+ ) AS start_lsn,\n\n\n> diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n> +checkpoint_progress_start(int flags)\n> [...]\n> +checkpoint_progress_update_param(int index, int64 val)\n> [...]\n> +checkpoint_progress_end(void)\n> +{\n> + /* In bootstrap mode, we don't actually record anything. 
*/\n> + if (IsBootstrapProcessingMode())\n> + return;\n\nDisabling pgstat progress reporting when in bootstrap processing mode\n/ startup/end-of-recovery makes very little sense (see upthread) and\nshould be removed, regardless of whether seperate functions stay.\n\n> diff --git a/src/include/commands/progress.h b/src/include/commands/progress.h\n> +#define PROGRESS_CHECKPOINT_PHASE_INIT 0\n\nGenerally, enum-like values in a stat_progress field are 1-indexed, to\ndifferentiate between empty/uninitialized (0) and states that have\nbeen set by the progress reporting infrastructure.\n\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Thu, 24 Feb 2022 18:14:53 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "> I think the change to ImmediateCheckpointRequested() makes no sense.\n> Before this patch, that function merely inquires whether there's an\n> immediate checkpoint queued. After this patch, it ... changes a\n> progress-reporting flag? I think it would make more sense to make the\n> progress-report flag change in whatever is the place that *requests* an\n> immediate checkpoint rather than here.\n>\n> > diff --git a/src/backend/postmaster/checkpointer.c b/src/backend/postmaster/checkpointer.c\n> > +ImmediateCheckpointRequested(int flags)\n> > if (cps->ckpt_flags & CHECKPOINT_IMMEDIATE)\n> > + {\n> > + updated_flags |= CHECKPOINT_IMMEDIATE;\n>\n> I don't think that these changes are expected behaviour. Under in this\n> condition; the currently running checkpoint is still not 'immediate',\n> but it is going to hurry up for a new, actually immediate checkpoint.\n> Those are different kinds of checkpoint handling; and I don't think\n> you should modify the reported flags to show that we're going to do\n> stuff faster than usual. 
Maybe maintain a separate 'upcoming\n> checkpoint flags' field instead?\n\nThank you Alvaro and Matthias for your views. I understand your point\nof not updating the progress-report flag here, as it just checks\nwhether CHECKPOINT_IMMEDIATE is set or not and takes an action\nbased on that but it doesn't change the checkpoint flags. I will\nmodify the code but I am a bit confused here. As per Alvaro, we need\nto make the progress-report flag change in whatever is the place that\n*requests* an immediate checkpoint. I feel this gives information\nabout the upcoming checkpoint, not the current one. So updating here\nprovides wrong details in the view. The flags available during\nCreateCheckPoint() will remain the same for the entire checkpoint\noperation and we should show the same information in the view till it\ncompletes. So just removing the above piece of code (modified in\nImmediateCheckpointRequested()) in the patch will make it correct. My\nopinion is that maintaining a separate field to show the upcoming checkpoint\nflags makes the view complex. Please share your thoughts.\n\nThanks & Regards,\n\nOn Thu, Feb 24, 2022 at 10:45 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Wed, 23 Feb 2022 at 14:28, Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> >\n> > Sharing the v2 patch. Kindly have a look and share your comments.\n>\n> Thanks for updating.\n>\n> > diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\n>\n> With the new pg_stat_progress_checkpoint, you should also add a\n> backreference to this progress reporting in the CHECKPOINT SQL command\n> documentation located in checkpoint.sgml, and maybe in wal.sgml and/or\n> backup.sgml too. See e.g. 
cluster.sgml around line 195 for an example.\n>\n> > diff --git a/src/backend/postmaster/checkpointer.c b/src/backend/postmaster/checkpointer.c\n> > +ImmediateCheckpointRequested(int flags)\n> > if (cps->ckpt_flags & CHECKPOINT_IMMEDIATE)\n> > + {\n> > + updated_flags |= CHECKPOINT_IMMEDIATE;\n>\n> I don't think that these changes are expected behaviour. Under in this\n> condition; the currently running checkpoint is still not 'immediate',\n> but it is going to hurry up for a new, actually immediate checkpoint.\n> Those are different kinds of checkpoint handling; and I don't think\n> you should modify the reported flags to show that we're going to do\n> stuff faster than usual. Maybe maintiain a seperate 'upcoming\n> checkpoint flags' field instead?\n>\n> > diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql\n> > + ( SELECT '0/0'::pg_lsn +\n> > + ((CASE\n> > + WHEN stat.lsn_int64 < 0 THEN pow(2::numeric, 64::numeric)::numeric\n> > + ELSE 0::numeric\n> > + END) +\n> > + stat.lsn_int64::numeric)\n> > + FROM (SELECT s.param3::bigint) AS stat(lsn_int64)\n> > + ) AS start_lsn,\n>\n> My LSN select statement was an example that could be run directly in\n> psql; the so you didn't have to embed the SELECT into the view query.\n> The following should be sufficient (and save the planner a few cycles\n> otherwise spent in inlining):\n>\n> + ('0/0'::pg_lsn +\n> + ((CASE\n> + WHEN s.param3 < 0 THEN pow(2::numeric,\n> 64::numeric)::numeric\n> + ELSE 0::numeric\n> + END) +\n> + s.param3::numeric)\n> + ) AS start_lsn,\n>\n>\n> > diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n> > +checkpoint_progress_start(int flags)\n> > [...]\n> > +checkpoint_progress_update_param(int index, int64 val)\n> > [...]\n> > +checkpoint_progress_end(void)\n> > +{\n> > + /* In bootstrap mode, we don't actually record anything. 
*/\n> > + if (IsBootstrapProcessingMode())\n> > + return;\n>\n> Disabling pgstat progress reporting when in bootstrap processing mode\n> / startup/end-of-recovery makes very little sense (see upthread) and\n> should be removed, regardless of whether seperate functions stay.\n>\n> > diff --git a/src/include/commands/progress.h b/src/include/commands/progress.h\n> > +#define PROGRESS_CHECKPOINT_PHASE_INIT 0\n>\n> Generally, enum-like values in a stat_progress field are 1-indexed, to\n> differentiate between empty/uninitialized (0) and states that have\n> been set by the progress reporting infrastructure.\n>\n>\n>\n> Kind regards,\n>\n> Matthias van de Meent\n\n\n", "msg_date": "Fri, 25 Feb 2022 00:23:27 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "Hi,\n\nOn Fri, Feb 25, 2022 at 12:23:27AM +0530, Nitin Jadhav wrote:\n> > I think the change to ImmediateCheckpointRequested() makes no sense.\n> > Before this patch, that function merely inquires whether there's an\n> > immediate checkpoint queued. After this patch, it ... changes a\n> > progress-reporting flag? I think it would make more sense to make the\n> > progress-report flag change in whatever is the place that *requests* an\n> > immediate checkpoint rather than here.\n> >\n> > > diff --git a/src/backend/postmaster/checkpointer.c b/src/backend/postmaster/checkpointer.c\n> > > +ImmediateCheckpointRequested(int flags)\n> > > if (cps->ckpt_flags & CHECKPOINT_IMMEDIATE)\n> > > + {\n> > > + updated_flags |= CHECKPOINT_IMMEDIATE;\n> >\n> > I don't think that these changes are expected behaviour. 
Under in this\n> > condition; the currently running checkpoint is still not 'immediate',\n> > but it is going to hurry up for a new, actually immediate checkpoint.\n> > Those are different kinds of checkpoint handling; and I don't think\n> > you should modify the reported flags to show that we're going to do\n> > stuff faster than usual. Maybe maintiain a seperate 'upcoming\n> > checkpoint flags' field instead?\n> \n> Thank you Alvaro and Matthias for your views. I understand your point\n> of not updating the progress-report flag here as it just checks\n> whether the CHECKPOINT_IMMEDIATE is set or not and takes an action\n> based on that but it doesn't change the checkpoint flags. I will\n> modify the code but I am a bit confused here. As per Alvaro, we need\n> to make the progress-report flag change in whatever is the place that\n> *requests* an immediate checkpoint. I feel this gives information\n> about the upcoming checkpoint not the current one. So updating here\n> provides wrong details in the view. The flags available during\n> CreateCheckPoint() will remain same for the entire checkpoint\n> operation and we should show the same information in the view till it\n> completes.\n\nI'm not sure what Matthias meant, but as far as I know there's no fundamental\ndifference between checkpoint with and without the CHECKPOINT_IMMEDIATE flag,\nand there's also no scheduling for multiple checkpoints.\n\nYes, the flags will remain the same but checkpoint.c will test both the passed\nflags and the shmem flags to see whether a delay should be added or not, which\nis the only difference in checkpoint processing for this flag. 
See the call to\nImmediateCheckpointRequested() which will look at the value in shmem:\n\n\t/*\n\t * Perform the usual duties and take a nap, unless we're behind schedule,\n\t * in which case we just try to catch up as quickly as possible.\n\t */\n\tif (!(flags & CHECKPOINT_IMMEDIATE) &&\n\t\t!ShutdownRequestPending &&\n\t\t!ImmediateCheckpointRequested() &&\n\t\tIsCheckpointOnSchedule(progress))\n[...]\n\n\n", "msg_date": "Fri, 25 Feb 2022 15:03:52 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "> Thank you Alvaro and Matthias for your views. I understand your point\n> of not updating the progress-report flag here as it just checks\n> whether the CHECKPOINT_IMMEDIATE is set or not and takes an action\n> based on that but it doesn't change the checkpoint flags. I will\n> modify the code but I am a bit confused here. As per Alvaro, we need\n> to make the progress-report flag change in whatever is the place that\n> *requests* an immediate checkpoint. I feel this gives information\n> about the upcoming checkpoint not the current one. So updating here\n> provides wrong details in the view. The flags available during\n> CreateCheckPoint() will remain same for the entire checkpoint\n> operation and we should show the same information in the view till it\n> completes. So just removing the above piece of code (modified in\n> ImmediateCheckpointRequested()) in the patch will make it correct. My\n> opinion about maintaining a separate field to show upcoming checkpoint\n> flags is it makes the view complex. Please share your thoughts.\n\nI have modified the code accordingly.\n---\n\n> I think the use of capitals in CHECKPOINT and CHECKPOINTER in the\n> documentation is excessive.\n\nFixed. Here the word CHECKPOINT represents command/checkpoint\noperation. 
If we treat it as a checkpoint operation, I agree to use\nlowercase but if we treat it as command, then I think uppercase is\nrecommended (Refer\nhttps://www.postgresql.org/docs/14/sql-checkpoint.html). Is it ok to\nalways use lowercase here?\n---\n\n> (Same for terms such as MULTIXACT and\n> others in those docs; we typically use those in lowercase when\n> user-facing; and do we really use term CLOG anymore? Don't we call it\n> \"commit log\" nowadays?)\n\nI have observed the CLOG term in the existing documentation. Anyway, I\nhave changed MULTIXACT to multixact, SUBTRANS to subtransaction and\nCLOG to commit log.\n---\n\n> + Whenever the checkpoint operation is running, the\n> + <structname>pg_stat_progress_checkpoint</structname> view will contain a\n> + single row indicating the progress of the checkpoint. The tables below\n>\n> Maybe it should show a single row , unless the checkpointer isn't running at\n> all (like in single user mode).\n\nNice thought. Can we add an additional checkpoint phase like 'Idle'?\nIdle is ON whenever the checkpointer process is running and there is\nno on-going checkpoint. Thoughts?\n---\n\n> + Process ID of a CHECKPOINTER process.\n>\n> It's *the* checkpointer process.\n\nFixed.\n---\n\n> pgstatfuncs.c has a whitespace issue (tab-space).\n\nI have verified with 'git diff --check' and also manually. I did not\nfind any issue. Kindly mention the specific code which has an issue.\n---\n\n> I suppose the functions should set provolatile.\n\nFixed.\n---\n\n> > I am storing the checkpoint start timestamp in the st_progress_param[]\n> > and this gets set only once during the checkpoint (at the start of the\n> > checkpoint). I have added function\n> > pg_stat_get_progress_checkpoint_elapsed() which calculates the elapsed\n> > time and returns a string. This function gets called whenever\n> > pg_stat_progress_checkpoint view is queried. 
Kindly refer v2 patch and\n> > share your thoughts.\n>\n> I dislike the lack of access to the actual value of the checkpoint\n> start / checkpoint elapsed field.\n>\n> As a user, if I query the pg_stat_progress_* views, my terminal or\n> application can easily interpret an `interval` value and cast it to\n> string, but the opposite is not true: the current implementation for\n> pg_stat_get_progress_checkpoint_elapsed loses precision. This is why\n> we use typed numeric fields in effectively all other places instead of\n> stringified versions of the values: oid fields, counters, etc are all\n> rendered as bigint in the view, so that no information is lost and\n> interpretation is trivial.\n\nDisplaying start time of the checkpoint.\n---\n\n> > I understand that the log based reporting is very costly and very\n> > frequent updates are not advisable. I am planning to use the existing\n> > infrastructure of 'log_startup_progress_interval' which provides an\n> > option for the user to configure the interval between each progress\n> > update. Hence it avoids frequent updates to server logs. This approach\n> > is used only during shutdown and end-of-recovery cases because we\n> > cannot access pg_stat_progress_checkpoint view during those scenarios.\n>\n> I see; but log_startup_progress_interval seems to be exclusively\n> consumed through the ereport_startup_progress macro. Why put\n> startup/shutdown logging on the same path as the happy flow of normal\n> checkpoints?\n\nYou mean to say while updating the progress of the checkpoint, call\npgstat_progress_update_param() and then call\nereport_startup_progress() ?\n\n> I think that, instead of looking to what might at some point be added,\n> it is better to use the currently available functions instead, and\n> move to new functions if and when the log-based reporting requires it.\n\nMake sense. Removing checkpoint_progress_update_param() and\ncheckpoint_progress_end(). 
I would like to concentrate on\npg_stat_progress_checkpoint view as of now and I will consider log\nbased reporting later.\n\n> > diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\n>\n> With the new pg_stat_progress_checkpoint, you should also add a\n> backreference to this progress reporting in the CHECKPOINT sql command\n> documentation located in checkpoint.sgml, and maybe in wal.sgml and/or\n> backup.sgml too. See e.g. cluster.sgml around line 195 for an example.\n\nI have updated in checkpoint.sgml and wal.sgml.\n\n> > diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql\n> > + ( SELECT '0/0'::pg_lsn +\n> > + ((CASE\n> > + WHEN stat.lsn_int64 < 0 THEN pow(2::numeric, 64::numeric)::numeric\n> > + ELSE 0::numeric\n> > + END) +\n> > + stat.lsn_int64::numeric)\n> > + FROM (SELECT s.param3::bigint) AS stat(lsn_int64)\n> > + ) AS start_lsn,\n>\n> My LSN select statement was an example that could be run directly in\n> psql; the so you didn't have to embed the SELECT into the view query.\n> The following should be sufficient (and save the planner a few cycles\n> otherwise spent in inlining):\n>\n> + ('0/0'::pg_lsn +\n> + ((CASE\n> + WHEN s.param3 < 0 THEN pow(2::numeric,\n> 64::numeric)::numeric\n> + ELSE 0::numeric\n> + END) +\n> + s.param3::numeric)\n> + ) AS start_lsn,\n\nThanks for the suggestion. Fixed.\n\n> > diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n> > +checkpoint_progress_start(int flags)\n> > [...]\n> > +checkpoint_progress_update_param(int index, int64 val)\n> > [...]\n> > +checkpoint_progress_end(void)\n> > +{\n> > + /* In bootstrap mode, we don't actually record anything. 
*/\n> > + if (IsBootstrapProcessingMode())\n> > + return;\n>\n> Disabling pgstat progress reporting when in bootstrap processing mode\n> / startup/end-of-recovery makes very little sense (see upthread) and\n> should be removed, regardless of whether seperate functions stay.\n\nRemoved since log based reporting is not part of the current patch.\n\n> > diff --git a/src/include/commands/progress.h b/src/include/commands/progress.h\n> > +#define PROGRESS_CHECKPOINT_PHASE_INIT 0\n>\n> Generally, enum-like values in a stat_progress field are 1-indexed, to\n> differentiate between empty/uninitialized (0) and states that have\n> been set by the progress reporting infrastructure.\n\nFixed.\n\nPlease find the v3 patch attached and share your thoughts.\n\nThanks & Regards,\nNitin Jadhav\nOn Fri, Feb 25, 2022 at 12:23 AM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > I think the change to ImmediateCheckpointRequested() makes no sense.\n> > Before this patch, that function merely inquires whether there's an\n> > immediate checkpoint queued. After this patch, it ... changes a\n> > progress-reporting flag? I think it would make more sense to make the\n> > progress-report flag change in whatever is the place that *requests* an\n> > immediate checkpoint rather than here.\n> >\n> > > diff --git a/src/backend/postmaster/checkpointer.c b/src/backend/postmaster/checkpointer.c\n> > > +ImmediateCheckpointRequested(int flags)\n> > > if (cps->ckpt_flags & CHECKPOINT_IMMEDIATE)\n> > > + {\n> > > + updated_flags |= CHECKPOINT_IMMEDIATE;\n> >\n> > I don't think that these changes are expected behaviour. Under in this\n> > condition; the currently running checkpoint is still not 'immediate',\n> > but it is going to hurry up for a new, actually immediate checkpoint.\n> > Those are different kinds of checkpoint handling; and I don't think\n> > you should modify the reported flags to show that we're going to do\n> > stuff faster than usual. 
Maybe maintiain a seperate 'upcoming\n> > checkpoint flags' field instead?\n>\n> Thank you Alvaro and Matthias for your views. I understand your point\n> of not updating the progress-report flag here as it just checks\n> whether the CHECKPOINT_IMMEDIATE is set or not and takes an action\n> based on that but it doesn't change the checkpoint flags. I will\n> modify the code but I am a bit confused here. As per Alvaro, we need\n> to make the progress-report flag change in whatever is the place that\n> *requests* an immediate checkpoint. I feel this gives information\n> about the upcoming checkpoint not the current one. So updating here\n> provides wrong details in the view. The flags available during\n> CreateCheckPoint() will remain same for the entire checkpoint\n> operation and we should show the same information in the view till it\n> completes. So just removing the above piece of code (modified in\n> ImmediateCheckpointRequested()) in the patch will make it correct. My\n> opinion about maintaining a separate field to show upcoming checkpoint\n> flags is it makes the view complex. Please share your thoughts.\n>\n> Thanks & Regards,\n>\n> On Thu, Feb 24, 2022 at 10:45 PM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> >\n> > On Wed, 23 Feb 2022 at 14:28, Nitin Jadhav\n> > <nitinjadhavpostgres@gmail.com> wrote:\n> > >\n> > > Sharing the v2 patch. Kindly have a look and share your comments.\n> >\n> > Thanks for updating.\n> >\n> > > diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\n> >\n> > With the new pg_stat_progress_checkpoint, you should also add a\n> > backreference to this progress reporting in the CHECKPOINT sql command\n> > documentation located in checkpoint.sgml, and maybe in wal.sgml and/or\n> > backup.sgml too. See e.g. 
cluster.sgml around line 195 for an example.\n> >\n> > > diff --git a/src/backend/postmaster/checkpointer.c b/src/backend/postmaster/checkpointer.c\n> > > +ImmediateCheckpointRequested(int flags)\n> > > if (cps->ckpt_flags & CHECKPOINT_IMMEDIATE)\n> > > + {\n> > > + updated_flags |= CHECKPOINT_IMMEDIATE;\n> >\n> > I don't think that these changes are expected behaviour. Under in this\n> > condition; the currently running checkpoint is still not 'immediate',\n> > but it is going to hurry up for a new, actually immediate checkpoint.\n> > Those are different kinds of checkpoint handling; and I don't think\n> > you should modify the reported flags to show that we're going to do\n> > stuff faster than usual. Maybe maintiain a seperate 'upcoming\n> > checkpoint flags' field instead?\n> >\n> > > diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql\n> > > + ( SELECT '0/0'::pg_lsn +\n> > > + ((CASE\n> > > + WHEN stat.lsn_int64 < 0 THEN pow(2::numeric, 64::numeric)::numeric\n> > > + ELSE 0::numeric\n> > > + END) +\n> > > + stat.lsn_int64::numeric)\n> > > + FROM (SELECT s.param3::bigint) AS stat(lsn_int64)\n> > > + ) AS start_lsn,\n> >\n> > My LSN select statement was an example that could be run directly in\n> > psql; the so you didn't have to embed the SELECT into the view query.\n> > The following should be sufficient (and save the planner a few cycles\n> > otherwise spent in inlining):\n> >\n> > + ('0/0'::pg_lsn +\n> > + ((CASE\n> > + WHEN s.param3 < 0 THEN pow(2::numeric,\n> > 64::numeric)::numeric\n> > + ELSE 0::numeric\n> > + END) +\n> > + s.param3::numeric)\n> > + ) AS start_lsn,\n> >\n> >\n> > > diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n> > > +checkpoint_progress_start(int flags)\n> > > [...]\n> > > +checkpoint_progress_update_param(int index, int64 val)\n> > > [...]\n> > > +checkpoint_progress_end(void)\n> > > +{\n> > > + /* In bootstrap mode, we don't actually record anything. 
*/\n> > > + if (IsBootstrapProcessingMode())\n> > > + return;\n> >\n> > Disabling pgstat progress reporting when in bootstrap processing mode\n> > / startup/end-of-recovery makes very little sense (see upthread) and\n> > should be removed, regardless of whether seperate functions stay.\n> >\n> > > diff --git a/src/include/commands/progress.h b/src/include/commands/progress.h\n> > > +#define PROGRESS_CHECKPOINT_PHASE_INIT 0\n> >\n> > Generally, enum-like values in a stat_progress field are 1-indexed, to\n> > differentiate between empty/uninitialized (0) and states that have\n> > been set by the progress reporting infrastructure.\n> >\n> >\n> >\n> > Kind regards,\n> >\n> > Matthias van de Meent", "msg_date": "Fri, 25 Feb 2022 20:26:27 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "> + if ((ckpt_flags &\n> + (CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_END_OF_RECOVERY)) == 0)\n> + {\n>\n> This code (present at multiple places) looks a little ugly to me, what\n> we can do instead is add a macro probably named IsShutdownCheckpoint()\n> which does the above check and use it in all the functions that have\n> this check. See below:\n>\n> #define IsShutdownCheckpoint(flags) \\\n> (flags & (CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_END_OF_RECOVERY) != 0)\n>\n> And then you may use this macro like:\n>\n> if (IsBootstrapProcessingMode() || IsShutdownCheckpoint(flags))\n> return;\n\nGood suggestion. In the v3 patch, I have removed the corresponding\ncode as these checks are not required. 
Hence this suggestion is not\napplicable now.\n---\n\n> pgstat_progress_start_command(PROGRESS_COMMAND_CHECKPOINT,\n> InvalidOid);\n> +\n> + val[0] = XLogCtl->InsertTimeLineID;\n> + val[1] = flags;\n> + val[2] = PROGRESS_CHECKPOINT_PHASE_INIT;\n> + val[3] = CheckpointStats.ckpt_start_t;\n> +\n> + pgstat_progress_update_multi_param(4, index, val);\n> + }\n>\n> Any specific reason for recording the timelineID in checkpoint stats\n> table? Will this ever change in our case?\n\nThe timelineID is used to decide whether the current operation is\ncheckpoint or restartpoint. There is a field in the view to display\nthis information.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Wed, Feb 23, 2022 at 9:46 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> + if ((ckpt_flags &\n> + (CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_END_OF_RECOVERY)) == 0)\n> + {\n>\n> This code (present at multiple places) looks a little ugly to me, what\n> we can do instead is add a macro probably named IsShutdownCheckpoint()\n> which does the above check and use it in all the functions that have\n> this check. 
See below:\n>\n> #define IsShutdownCheckpoint(flags) \\\n> (flags & (CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_END_OF_RECOVERY) != 0)\n>\n> And then you may use this macro like:\n>\n> if (IsBootstrapProcessingMode() || IsShutdownCheckpoint(flags))\n> return;\n>\n> This change can be done in all these functions:\n>\n> +void\n> +checkpoint_progress_start(int flags)\n>\n> --\n>\n> + */\n> +void\n> +checkpoint_progress_update_param(int index, int64 val)\n>\n> --\n>\n> + * Stop reporting progress of the checkpoint.\n> + */\n> +void\n> +checkpoint_progress_end(void)\n>\n> ==\n>\n> +\n> pgstat_progress_start_command(PROGRESS_COMMAND_CHECKPOINT,\n> InvalidOid);\n> +\n> + val[0] = XLogCtl->InsertTimeLineID;\n> + val[1] = flags;\n> + val[2] = PROGRESS_CHECKPOINT_PHASE_INIT;\n> + val[3] = CheckpointStats.ckpt_start_t;\n> +\n> + pgstat_progress_update_multi_param(4, index, val);\n> + }\n>\n> Any specific reason for recording the timelineID in checkpoint stats\n> table? Will this ever change in our case?\n>\n> --\n> With Regards,\n> Ashutosh Sharma.\n>\n> On Wed, Feb 23, 2022 at 6:59 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> >\n> > > I will make use of pgstat_progress_update_multi_param() in the next\n> > > patch to replace multiple calls to checkpoint_progress_update_param().\n> >\n> > Fixed.\n> > ---\n> >\n> > > > The other progress tables use [type]_total as column names for counter\n> > > > targets (e.g. backup_total for backup_streamed, heap_blks_total for\n> > > > heap_blks_scanned, etc.). I think that `buffers_total` and\n> > > > `files_total` would be better column names.\n> > >\n> > > I agree and I will update this in the next patch.\n> >\n> > Fixed.\n> > ---\n> >\n> > > How about this \"The checkpoint is started because max_wal_size is reached\".\n> > >\n> > > \"The checkpoint is started because checkpoint_timeout expired\".\n> > >\n> > > \"The checkpoint is started because some operation forced a checkpoint\".\n> >\n> > I have used the above description. 
Kindly let me know if any changes\n> > are required.\n> > ---\n> >\n> > > > > + <entry><literal>checkpointing CommitTs pages</literal></entry>\n> > > >\n> > > > CommitTs -> Commit time stamp\n> > >\n> > > I will handle this in the next patch.\n> >\n> > Fixed.\n> > ---\n> >\n> > > There are more scenarios where you can have a baackend requesting a checkpoint\n> > > and waiting for its completion, and there may be more than one backend\n> > > concerned, so I don't think that storing only one / the first backend pid is\n> > > ok.\n> >\n> > Thanks for this information. I am not considering backend_pid.\n> > ---\n> >\n> > > I think all the information should be exposed. Only knowing why the current\n> > > checkpoint has been triggered without any further information seems a bit\n> > > useless. Think for instance for cases like [1].\n> >\n> > I have supported all possible checkpoint kinds. Added\n> > pg_stat_get_progress_checkpoint_kind() to convert the flags (int) to a\n> > string representing a combination of flags and also checking for the\n> > flag update in ImmediateCheckpointRequested() which checks whether\n> > CHECKPOINT_IMMEDIATE flag is set or not. I did not find any other\n> > cases where the flags get changed (which changes the current\n> > checkpoint behaviour) during the checkpoint. Kindly let me know if I\n> > am missing something.\n> > ---\n> >\n> > > > I feel 'processes_wiating' aligns more with the naming conventions of\n> > > > the fields of the existing progres views.\n> > >\n> > > There's at least pg_stat_progress_vacuum.num_dead_tuples. Anyway I don't have\n> > > a strong opinion on it, just make sure to correct the typo.\n> >\n> > More analysis is required to support this. 
I am planning to take care\n> > in the next patch.\n> > ---\n> >\n> > > If pg_is_in_recovery() is true, then it's a restartpoint, otherwise it's a\n> > > restartpoint if the checkpoint's timeline is different from the current\n> > > timeline?\n> >\n> > Fixed.\n> >\n> > Sharing the v2 patch. Kindly have a look and share your comments.\n> >\n> > Thanks & Regards,\n> > Nitin Jadhav\n> >\n> >\n> >\n> >\n> > On Tue, Feb 22, 2022 at 12:08 PM Nitin Jadhav\n> > <nitinjadhavpostgres@gmail.com> wrote:\n> > >\n> > > > > Thank you for sharing the information. 'triggering backend PID' (int)\n> > > > > - can be stored without any problem. 'checkpoint or restartpoint?'\n> > > > > (boolean) - can be stored as a integer value like\n> > > > > PROGRESS_CHECKPOINT_TYPE_CHECKPOINT(0) and\n> > > > > PROGRESS_CHECKPOINT_TYPE_RESTARTPOINT(1). 'elapsed time' (store as\n> > > > > start time in stat_progress, timestamp fits in 64 bits) - As\n> > > > > Timestamptz is of type int64 internally, so we can store the timestamp\n> > > > > value in the progres parameter and then expose a function like\n> > > > > 'pg_stat_get_progress_checkpoint_elapsed' which takes int64 (not\n> > > > > Timestamptz) as argument and then returns string representing the\n> > > > > elapsed time.\n> > > >\n> > > > No need to use a string there; I think exposing the checkpoint start\n> > > > time is good enough. 
The conversion of int64 to timestamp[tz] can be\n> > > > done in SQL (although I'm not sure that exposing the internal bitwise\n> > > > representation of Interval should be exposed to that extent) [0].\n> > > > Users can then extract the duration interval using now() - start_time,\n> > > > which also allows the user to use their own preferred formatting.\n> > >\n> > > The reason for showing the elapsed time rather than exposing the\n> > > timestamp directly is in case of checkpoint during shutdown and\n> > > end-of-recovery, I am planning to log a message in server logs using\n> > > 'log_startup_progress_interval' infrastructure which displays elapsed\n> > > time. So just to match both of the behaviour I am displaying elapsed\n> > > time here. I feel that elapsed time gives a quicker feel of the\n> > > progress. Kindly let me know if you still feel just exposing the\n> > > timestamp is better than showing the elapsed time.\n> > >\n> > > > > 'checkpoint start location' (lsn = uint64) - I feel we\n> > > > > cannot use progress parameters for this case. As assigning uint64 to\n> > > > > int64 type would be an issue for larger values and can lead to hidden\n> > > > > bugs.\n> > > >\n> > > > Not necessarily - we can (without much trouble) do a bitwise cast from\n> > > > uint64 to int64, and then (in SQL) cast it back to a pg_lsn [1]. Not\n> > > > very elegant, but it works quite well.\n> > > >\n> > > > [1] SELECT '0/0'::pg_lsn + ((CASE WHEN stat.my_int64 < 0 THEN\n> > > > pow(2::numeric, 64::numeric)::numeric ELSE 0::numeric END) +\n> > > > stat.my_int64::numeric) FROM (SELECT -2::bigint /* 0xFFFFFFFF/FFFFFFFE\n> > > > */ AS my_bigint_lsn) AS stat(my_int64);\n> > >\n> > > Thanks for sharing. It works. 
I will include this in the next patch.\n> > > On Sat, Feb 19, 2022 at 11:02 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > > >\n> > > > Hi,\n> > > >\n> > > > On Fri, Feb 18, 2022 at 08:07:05PM +0530, Nitin Jadhav wrote:\n> > > > >\n> > > > > The backend_pid contains a valid value only during\n> > > > > the CHECKPOINT command issued by the backend explicitly, otherwise the\n> > > > > value will be 0. We may have to add an additional field to\n> > > > > 'CheckpointerShmemStruct' to hold the backend pid. The backend\n> > > > > requesting the checkpoint will update its pid to this structure.\n> > > > > Kindly let me know if you still feel the backend_pid field is not\n> > > > > necessary.\n> > > >\n> > > > There are more scenarios where you can have a baackend requesting a checkpoint\n> > > > and waiting for its completion, and there may be more than one backend\n> > > > concerned, so I don't think that storing only one / the first backend pid is\n> > > > ok.\n> > > >\n> > > > > > And also while looking at the patch I see there's the same problem that I\n> > > > > > mentioned in the previous thread, which is that the effective flags can be\n> > > > > > updated once the checkpoint started, and as-is the view won't reflect that. It\n> > > > > > also means that you can't simply display one of wal, time or force but a\n> > > > > > possible combination of the flags (including the one not handled in v1).\n> > > > >\n> > > > > If I understand the above comment properly, it has 2 points. First is\n> > > > > to display the combination of flags rather than just displaying wal,\n> > > > > time or force - The idea behind this is to just let the user know the\n> > > > > reason for checkpointing. That is, the checkpoint is started because\n> > > > > max_wal_size is reached or checkpoint_timeout expired or explicitly\n> > > > > issued CHECKPOINT command. 
The other flags like CHECKPOINT_IMMEDIATE,\n> > > > > CHECKPOINT_WAIT or CHECKPOINT_FLUSH_ALL indicate how the checkpoint\n> > > > > has to be performed. Hence I have not included those in the view. If\n> > > > > it is really required, I would like to modify the code to include\n> > > > > other flags and display the combination.\n> > > >\n> > > > I think all the information should be exposed. Only knowing why the current\n> > > > checkpoint has been triggered without any further information seems a bit\n> > > > useless. Think for instance for cases like [1].\n> > > >\n> > > > > Second point is to reflect\n> > > > > the updated flags in the view. AFAIK, there is a possibility that the\n> > > > > flags get updated during the on-going checkpoint but the reason for\n> > > > > checkpoint (wal, time or force) will remain same for the current\n> > > > > checkpoint. There might be a change in how checkpoint has to be\n> > > > > performed if CHECKPOINT_IMMEDIATE flag is set. If we go with\n> > > > > displaying the combination of flags in the view, then probably we may\n> > > > > have to reflect this in the view.\n> > > >\n> > > > You can only \"upgrade\" a checkpoint, but not \"downgrade\" it. So if for\n> > > > instance you find both CHECKPOINT_CAUSE_TIME and CHECKPOINT_FORCE (which is\n> > > > possible) you can easily know which one was the one that triggered the\n> > > > checkpoint and which one was added later.\n> > > >\n> > > > > > > Probably a new field named 'processes_wiating' or 'events_waiting' can be\n> > > > > > > added for this purpose.\n> > > > > >\n> > > > > > Maybe num_process_waiting?\n> > > > >\n> > > > > I feel 'processes_wiating' aligns more with the naming conventions of\n> > > > > the fields of the existing progres views.\n> > > >\n> > > > There's at least pg_stat_progress_vacuum.num_dead_tuples. 
Anyway I don't have\n> > > > a strong opinion on it, just make sure to correct the typo.\n> > > >\n> > > > > > > Probably writing of buffers or syncing files may complete before\n> > > > > > > pg_is_in_recovery() returns false. But there are some cleanup\n> > > > > > > operations happen as part of the checkpoint. During this scenario, we\n> > > > > > > may get false value for pg_is_in_recovery(). Please refer following\n> > > > > > > piece of code which is present in CreateRestartpoint().\n> > > > > > >\n> > > > > > > if (!RecoveryInProgress())\n> > > > > > > replayTLI = XLogCtl->InsertTimeLineID;\n> > > > > >\n> > > > > > Then maybe we could store the timeline rather then then kind of checkpoint?\n> > > > > > You should still be able to compute the information while giving a bit more\n> > > > > > information for the same memory usage.\n> > > > >\n> > > > > Can you please describe more about how checkpoint/restartpoint can be\n> > > > > confirmed using the timeline id.\n> > > >\n> > > > If pg_is_in_recovery() is true, then it's a restartpoint, otherwise it's a\n> > > > restartpoint if the checkpoint's timeline is different from the current\n> > > > timeline?\n> > > >\n> > > > [1] https://www.postgresql.org/message-id/1486805889.24568.96.camel%40credativ.de\n\n\n", "msg_date": "Fri, 25 Feb 2022 20:37:28 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "> > Thank you Alvaro and Matthias for your views. I understand your point\n> > of not updating the progress-report flag here as it just checks\n> > whether the CHECKPOINT_IMMEDIATE is set or not and takes an action\n> > based on that but it doesn't change the checkpoint flags. I will\n> > modify the code but I am a bit confused here. 
As per Alvaro, we need\n> > to make the progress-report flag change in whatever is the place that\n> > *requests* an immediate checkpoint. I feel this gives information\n> > about the upcoming checkpoint not the current one. So updating here\n> > provides wrong details in the view. The flags available during\n> > CreateCheckPoint() will remain the same for the entire checkpoint\n> > operation and we should show the same information in the view till it\n> > completes.\n>\n> I'm not sure what Matthias meant, but as far as I know there's no fundamental\n> difference between checkpoint with and without the CHECKPOINT_IMMEDIATE flag,\n> and there's also no scheduling for multiple checkpoints.\n>\n> Yes, the flags will remain the same but checkpoint.c will test both the passed\n> flags and the shmem flags to see whether a delay should be added or not, which\n> is the only difference in checkpoint processing for this flag. 
Please share your thoughts.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Fri, Feb 25, 2022 at 12:33 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Fri, Feb 25, 2022 at 12:23:27AM +0530, Nitin Jadhav wrote:\n> > > I think the change to ImmediateCheckpointRequested() makes no sense.\n> > > Before this patch, that function merely inquires whether there's an\n> > > immediate checkpoint queued. After this patch, it ... changes a\n> > > progress-reporting flag? I think it would make more sense to make the\n> > > progress-report flag change in whatever is the place that *requests* an\n> > > immediate checkpoint rather than here.\n> > >\n> > > > diff --git a/src/backend/postmaster/checkpointer.c b/src/backend/postmaster/checkpointer.c\n> > > > +ImmediateCheckpointRequested(int flags)\n> > > > if (cps->ckpt_flags & CHECKPOINT_IMMEDIATE)\n> > > > + {\n> > > > + updated_flags |= CHECKPOINT_IMMEDIATE;\n> > >\n> > > I don't think that these changes are expected behaviour. Under in this\n> > > condition; the currently running checkpoint is still not 'immediate',\n> > > but it is going to hurry up for a new, actually immediate checkpoint.\n> > > Those are different kinds of checkpoint handling; and I don't think\n> > > you should modify the reported flags to show that we're going to do\n> > > stuff faster than usual. Maybe maintiain a seperate 'upcoming\n> > > checkpoint flags' field instead?\n> >\n> > Thank you Alvaro and Matthias for your views. I understand your point\n> > of not updating the progress-report flag here as it just checks\n> > whether the CHECKPOINT_IMMEDIATE is set or not and takes an action\n> > based on that but it doesn't change the checkpoint flags. I will\n> > modify the code but I am a bit confused here. As per Alvaro, we need\n> > to make the progress-report flag change in whatever is the place that\n> > *requests* an immediate checkpoint. I feel this gives information\n> > about the upcoming checkpoint not the current one. 
So updating here\n> > provides wrong details in the view. The flags available during\n> > CreateCheckPoint() will remain same for the entire checkpoint\n> > operation and we should show the same information in the view till it\n> > completes.\n>\n> I'm not sure what Matthias meant, but as far as I know there's no fundamental\n> difference between checkpoint with and without the CHECKPOINT_IMMEDIATE flag,\n> and there's also no scheduling for multiple checkpoints.\n>\n> Yes, the flags will remain the same but checkpoint.c will test both the passed\n> flags and the shmem flags to see whether a delay should be added or not, which\n> is the only difference in checkpoint processing for this flag. See the call to\n> ImmediateCheckpointRequested() which will look at the value in shmem:\n>\n> /*\n> * Perform the usual duties and take a nap, unless we're behind schedule,\n> * in which case we just try to catch up as quickly as possible.\n> */\n> if (!(flags & CHECKPOINT_IMMEDIATE) &&\n> !ShutdownRequestPending &&\n> !ImmediateCheckpointRequested() &&\n> IsCheckpointOnSchedule(progress))\n> [...]\n\n\n", "msg_date": "Fri, 25 Feb 2022 20:53:50 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "On Fri, Feb 25, 2022 at 08:53:50PM +0530, Nitin Jadhav wrote:\n> >\n> > I'm not sure what Matthias meant, but as far as I know there's no fundamental\n> > difference between checkpoint with and without the CHECKPOINT_IMMEDIATE flag,\n> > and there's also no scheduling for multiple checkpoints.\n> >\n> > Yes, the flags will remain the same but checkpoint.c will test both the passed\n> > flags and the shmem flags to see whether a delay should be added or not, which\n> > is the only difference in checkpoint processing for this flag. 
See the call to\n> > ImmediateCheckpointRequested() which will look at the value in shmem:\n> >\n> > /*\n> > * Perform the usual duties and take a nap, unless we're behind schedule,\n> > * in which case we just try to catch up as quickly as possible.\n> > */\n> > if (!(flags & CHECKPOINT_IMMEDIATE) &&\n> > !ShutdownRequestPending &&\n> > !ImmediateCheckpointRequested() &&\n> > IsCheckpointOnSchedule(progress))\n> \n> I understand that the checkpointer considers flags as well as the\n> shmem flags and if CHECKPOINT_IMMEDIATE flag is set, it affects the\n> current checkpoint operation (No further delay) but does not change\n> the current flag value. Should we display this change in the kind\n> field of the view or not? Please share your thoughts.\n\nI think the fields should be added. It's good to know that a checkpoint was\ntrigger due to normal activity and should be spreaded, and then something\nupgraded it to an immediate checkpoint. If you're desperately waiting for the\nend of a checkpoint for some reason and ask for an immediate checkpoint, you'll\ncertainly be happy to see that the checkpointer is aware of it.\n\nBut maybe I missed something in the code, so let's wait for Matthias input\nabout it.\n\n\n", "msg_date": "Sat, 26 Feb 2022 00:35:45 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "On Fri, 25 Feb 2022 at 17:35, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Fri, Feb 25, 2022 at 08:53:50PM +0530, Nitin Jadhav wrote:\n> > >\n> > > I'm not sure what Matthias meant, but as far as I know there's no fundamental\n> > > difference between checkpoint with and without the CHECKPOINT_IMMEDIATE flag,\n> > > and there's also no scheduling for multiple checkpoints.\n> > >\n> > > Yes, the flags will remain the same but checkpoint.c will test both the passed\n> > > flags and 
the shmem flags to see whether a delay should be added or not, which\n> > > is the only difference in checkpoint processing for this flag. See the call to\n> > > ImmediateCheckpointRequested() which will look at the value in shmem:\n> > >\n> > > /*\n> > > * Perform the usual duties and take a nap, unless we're behind schedule,\n> > > * in which case we just try to catch up as quickly as possible.\n> > > */\n> > > if (!(flags & CHECKPOINT_IMMEDIATE) &&\n> > > !ShutdownRequestPending &&\n> > > !ImmediateCheckpointRequested() &&\n> > > IsCheckpointOnSchedule(progress))\n> >\n> > I understand that the checkpointer considers flags as well as the\n> > shmem flags and if CHECKPOINT_IMMEDIATE flag is set, it affects the\n> > current checkpoint operation (No further delay) but does not change\n> > the current flag value. Should we display this change in the kind\n> > field of the view or not? Please share your thoughts.\n>\n> I think the fields should be added. It's good to know that a checkpoint was\n> trigger due to normal activity and should be spreaded, and then something\n> upgraded it to an immediate checkpoint. If you're desperately waiting for the\n> end of a checkpoint for some reason and ask for an immediate checkpoint, you'll\n> certainly be happy to see that the checkpointer is aware of it.\n>\n> But maybe I missed something in the code, so let's wait for Matthias input\n> about it.\n\nThe point I was trying to make was \"If cps->ckpt_flags is\nCHECKPOINT_IMMEDIATE, we hurry up to start the new checkpoint that is\nactually immediate\". 
That doesn't mean that this checkpoint was\ncreated with IMMEDIATE or running using IMMEDIATE, only that optional\ndelays are now being skipped instead.\n\nTo let the user detect _why_ the optional delays are now being\nskipped, I propose not to report this currently running checkpoint's\n\"flags | CHECKPOINT_IMMEDIATE\", but to add reporting of the next\ncheckpoint's flags; which would allow the detection and display of the\nCHECKPOINT_IMMEDIATE we're actually hurrying for (plus some more\ninteresting information flags.\n\n-Matthias\n\nPS. I just noticed that the checkpoint flags are also being parsed and\nstringified twice in LogCheckpointStart; and adding another duplicate\nin the current code would put that at 3 copies of effectively the same\ncode. Do we maybe want to deduplicate that into macros, similar to\nLSN_FORMAT_ARGS?\n\n\n", "msg_date": "Fri, 25 Feb 2022 18:49:42 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "On Fri, Feb 25, 2022 at 06:49:42PM +0100, Matthias van de Meent wrote:\n>\n> The point I was trying to make was \"If cps->ckpt_flags is\n> CHECKPOINT_IMMEDIATE, we hurry up to start the new checkpoint that is\n> actually immediate\". 
That doesn't mean that this checkpoint was\n> created with IMMEDIATE or running using IMMEDIATE, only that optional\n> delays are now being skipped instead.\n\nAh, I now see what you mean.\n\n> To let the user detect _why_ the optional delays are now being\n> skipped, I propose not to report this currently running checkpoint's\n> \"flags | CHECKPOINT_IMMEDIATE\", but to add reporting of the next\n> checkpoint's flags; which would allow the detection and display of the\n> CHECKPOINT_IMMEDIATE we're actually hurrying for (plus some more\n> interesting information flags.\n\nI'm still not convinced that's a sensible approach. The next checkpoint will\nbe displayed in the view as CHECKPOINT_IMMEDIATE, so you will then know about\nit. I'm not sure that having that specific information in the view is\ngoing to help, especially if users have to understand \"a slow checkpoint is\nactually fast even if it's displayed as slow if the next checkpoint is going to\nbe fast\". Saying \"it's timed\" (which imply slow) and \"it's fast\" is maybe\nstill counter intuitive, but at least have a better chance to see there's\nsomething going on and refer to the doc if you don't get it.\n\n\n", "msg_date": "Sat, 26 Feb 2022 02:30:36 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "On Sat, Feb 26, 2022 at 02:30:36AM +0800, Julien Rouhaud wrote:\n> On Fri, Feb 25, 2022 at 06:49:42PM +0100, Matthias van de Meent wrote:\n> >\n> > The point I was trying to make was \"If cps->ckpt_flags is\n> > CHECKPOINT_IMMEDIATE, we hurry up to start the new checkpoint that is\n> > actually immediate\". 
That doesn't mean that this checkpoint was\n> > created with IMMEDIATE or running using IMMEDIATE, only that optional\n> > delays are now being skipped instead.\n> \n> Ah, I now see what you mean.\n> \n> > To let the user detect _why_ the optional delays are now being\n> > skipped, I propose not to report this currently running checkpoint's\n> > \"flags | CHECKPOINT_IMMEDIATE\", but to add reporting of the next\n> > checkpoint's flags; which would allow the detection and display of the\n> > CHECKPOINT_IMMEDIATE we're actually hurrying for (plus some more\n> > interesting information flags.\n> \n> I'm still not convinced that's a sensible approach. The next checkpoint will\n> be displayed in the view as CHECKPOINT_IMMEDIATE, so you will then know about\n> it. I'm not sure that having that specific information in the view is\n> going to help, especially if users have to understand \"a slow checkpoint is\n> actually fast even if it's displayed as slow if the next checkpoint is going to\n> be fast\". Saying \"it's timed\" (which imply slow) and \"it's fast\" is maybe\n> still counter intuitive, but at least have a better chance to see there's\n> something going on and refer to the doc if you don't get it.\n\nJust to be clear, I do think that it's worthwhile to add some information that\nsome backends are waiting for that next checkpoint. As discussed before, an\nint for the number of backends looks like enough information to me.\n\n\n", "msg_date": "Sat, 26 Feb 2022 02:37:30 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "On Fri, Feb 25, 2022 at 8:38 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n\nHad a quick look over the v3 patch. 
I'm not sure if it's the best way\nto have pg_stat_get_progress_checkpoint_type,\npg_stat_get_progress_checkpoint_kind and\npg_stat_get_progress_checkpoint_start_time just for printing info in\nreadable format in pg_stat_progress_checkpoint. I don't think these\nfunctions will ever be useful for the users.\n\n1) Can't we use pg_is_in_recovery to determine if it's a restartpoint\nor checkpoint instead of having a new function\npg_stat_get_progress_checkpoint_type?\n\n2) Can't we just have these checks inside CASE-WHEN-THEN-ELSE blocks\ndirectly instead of new function pg_stat_get_progress_checkpoint_kind?\n+ snprintf(ckpt_kind, MAXPGPATH, \"%s%s%s%s%s%s%s%s%s\",\n+ (flags == 0) ? \"unknown\" : \"\",\n+ (flags & CHECKPOINT_IS_SHUTDOWN) ? \"shutdown \" : \"\",\n+ (flags & CHECKPOINT_END_OF_RECOVERY) ? \"end-of-recovery \" : \"\",\n+ (flags & CHECKPOINT_IMMEDIATE) ? \"immediate \" : \"\",\n+ (flags & CHECKPOINT_FORCE) ? \"force \" : \"\",\n+ (flags & CHECKPOINT_WAIT) ? \"wait \" : \"\",\n+ (flags & CHECKPOINT_CAUSE_XLOG) ? \"wal \" : \"\",\n+ (flags & CHECKPOINT_CAUSE_TIME) ? \"time \" : \"\",\n+ (flags & CHECKPOINT_FLUSH_ALL) ? \"flush-all\" : \"\");\n\n3) Why do we need this extra calculation for start_lsn? Do you ever\nsee a negative LSN or something here?\n+ ('0/0'::pg_lsn + (\n+ CASE\n+ WHEN (s.param3 < 0) THEN pow((2)::numeric, (64)::numeric)\n+ ELSE (0)::numeric\n+ END + (s.param3)::numeric)) AS start_lsn,\n\n4) Can't you use timestamptz_in(to_char(s.param4)) instead of\npg_stat_get_progress_checkpoint_start_time? I don't quite understand\nthe reasoning for having this function and it's named as *checkpoint*\nwhen it doesn't do anything specific to the checkpoint at all?\n\nHaving 3 unnecessary functions that aren't useful to the users at all\nin proc.dat will simply eatup the function oids IMO. 
Hence, I suggest\nlet's try to do without extra functions.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sun, 27 Feb 2022 20:44:39 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "On Sun, Feb 27, 2022 at 8:44 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Feb 25, 2022 at 8:38 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n>\n> Had a quick look over the v3 patch. I'm not sure if it's the best way\n> to have pg_stat_get_progress_checkpoint_type,\n> pg_stat_get_progress_checkpoint_kind and\n> pg_stat_get_progress_checkpoint_start_time just for printing info in\n> readable format in pg_stat_progress_checkpoint. I don't think these\n> functions will ever be useful for the users.\n>\n> 1) Can't we use pg_is_in_recovery to determine if it's a restartpoint\n> or checkpoint instead of having a new function\n> pg_stat_get_progress_checkpoint_type?\n>\n> 2) Can't we just have these checks inside CASE-WHEN-THEN-ELSE blocks\n> directly instead of new function pg_stat_get_progress_checkpoint_kind?\n> + snprintf(ckpt_kind, MAXPGPATH, \"%s%s%s%s%s%s%s%s%s\",\n> + (flags == 0) ? \"unknown\" : \"\",\n> + (flags & CHECKPOINT_IS_SHUTDOWN) ? \"shutdown \" : \"\",\n> + (flags & CHECKPOINT_END_OF_RECOVERY) ? \"end-of-recovery \" : \"\",\n> + (flags & CHECKPOINT_IMMEDIATE) ? \"immediate \" : \"\",\n> + (flags & CHECKPOINT_FORCE) ? \"force \" : \"\",\n> + (flags & CHECKPOINT_WAIT) ? \"wait \" : \"\",\n> + (flags & CHECKPOINT_CAUSE_XLOG) ? \"wal \" : \"\",\n> + (flags & CHECKPOINT_CAUSE_TIME) ? \"time \" : \"\",\n> + (flags & CHECKPOINT_FLUSH_ALL) ? \"flush-all\" : \"\");\n>\n> 3) Why do we need this extra calculation for start_lsn? 
Do you ever\n> see a negative LSN or something here?\n> + ('0/0'::pg_lsn + (\n> + CASE\n> + WHEN (s.param3 < 0) THEN pow((2)::numeric, (64)::numeric)\n> + ELSE (0)::numeric\n> + END + (s.param3)::numeric)) AS start_lsn,\n>\n> 4) Can't you use timestamptz_in(to_char(s.param4)) instead of\n> pg_stat_get_progress_checkpoint_start_time? I don't quite understand\n> the reasoning for having this function and it's named as *checkpoint*\n> when it doesn't do anything specific to the checkpoint at all?\n>\n> Having 3 unnecessary functions that aren't useful to the users at all\n> in proc.dat will simply eatup the function oids IMO. Hence, I suggest\n> let's try to do without extra functions.\n\nAnother thought for my review comment:\n> 1) Can't we use pg_is_in_recovery to determine if it's a restartpoint\n> or checkpoint instead of having a new function\n> pg_stat_get_progress_checkpoint_type?\n\nI don't think using pg_is_in_recovery work here as it is taken after\nthe checkpoint has started. So, I think the right way here is to send\n1 in CreateCheckPoint and 2 in CreateRestartPoint and use\nCASE-WHEN-ELSE-END to show \"1\": \"checkpoint\" \"2\":\"restartpoint\".\n\nContinuing my review:\n\n5) Do we need a special phase for this checkpoint operation? 
I'm not\nsure in which cases it will take a long time, but it looks like\nthere's a wait loop here.\nvxids = GetVirtualXIDsDelayingChkpt(&nvxids);\nif (nvxids > 0)\n{\ndo\n{\npg_usleep(10000L); /* wait for 10 msec */\n} while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids));\n}\n\nAlso, how about special phases for SyncPostCheckpoint(),\nSyncPreCheckpoint(), InvalidateObsoleteReplicationSlots(),\nPreallocXlogFiles() (it currently pre-allocates only 1 WAL file, but\nit might be increase in future (?)), TruncateSUBTRANS()?\n\n6) SLRU (Simple LRU) isn't a phase here, you can just say\nPROGRESS_CHECKPOINT_PHASE_PREDICATE_LOCK_PAGES.\n+\n+ pgstat_progress_update_param(PROGRESS_CHECKPOINT_PHASE,\n+ PROGRESS_CHECKPOINT_PHASE_SLRU_PAGES);\n CheckPointPredicate();\n\nAnd :s/checkpointing SLRU pages/checkpointing predicate lock pages\n+ WHEN 9 THEN 'checkpointing SLRU pages'\n\n\n7) :s/PROGRESS_CHECKPOINT_PHASE_FILE_SYNC/PROGRESS_CHECKPOINT_PHASE_PROCESS_FILE_SYNC_REQUESTS\n\nAnd :s/WHEN 11 THEN 'performing sync requests'/WHEN 11 THEN\n'processing file sync requests'\n\n8) :s/Finalizing/finalizing\n+ WHEN 14 THEN 'Finalizing'\n\n9) :s/checkpointing snapshots/checkpointing logical replication snapshot files\n+ WHEN 3 THEN 'checkpointing snapshots'\n:s/checkpointing logical rewrite mappings/checkpointing logical\nreplication rewrite mapping files\n+ WHEN 4 THEN 'checkpointing logical rewrite mappings'\n\n10) I'm not sure if it's discussed, how about adding the number of\nsnapshot/mapping files so far the checkpoint has processed in file\nprocessing while loops of\nCheckPointSnapBuild/CheckPointLogicalRewriteHeap? 
Sometimes, there can\nbe many logical snapshot or mapping files and users may be interested\nin knowing the so-far-processed-file-count.\n\n11) I think it's discussed, are we going to add the pid of the\ncheckpoint requestor?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 28 Feb 2022 10:21:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "Hi,\n\nOn Mon, Feb 28, 2022 at 10:21:23AM +0530, Bharath Rupireddy wrote:\n> \n> Another thought for my review comment:\n> > 1) Can't we use pg_is_in_recovery to determine if it's a restartpoint\n> > or checkpoint instead of having a new function\n> > pg_stat_get_progress_checkpoint_type?\n> \n> I don't think using pg_is_in_recovery work here as it is taken after\n> the checkpoint has started. So, I think the right way here is to send\n> 1 in CreateCheckPoint and 2 in CreateRestartPoint and use\n> CASE-WHEN-ELSE-END to show \"1\": \"checkpoint\" \"2\":\"restartpoint\".\n\nI suggested upthread to store the starting timeline instead. 
This way you can\ndeduce whether it's a restartpoint or a checkpoint, but you can also deduce\nother information, like what was the starting WAL.\n\n> 11) I think it's discussed, are we going to add the pid of the\n> checkpoint requestor?\n\nAs mentioned upthread, there can be multiple backends that request a\ncheckpoint, so unless we want to store an array of pid we should store a number\nof backend that are waiting for a new checkpoint.\n\n\n", "msg_date": "Mon, 28 Feb 2022 14:32:22 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "On Mon, Feb 28, 2022 at 12:02 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Mon, Feb 28, 2022 at 10:21:23AM +0530, Bharath Rupireddy wrote:\n> >\n> > Another thought for my review comment:\n> > > 1) Can't we use pg_is_in_recovery to determine if it's a restartpoint\n> > > or checkpoint instead of having a new function\n> > > pg_stat_get_progress_checkpoint_type?\n> >\n> > I don't think using pg_is_in_recovery work here as it is taken after\n> > the checkpoint has started. So, I think the right way here is to send\n> > 1 in CreateCheckPoint and 2 in CreateRestartPoint and use\n> > CASE-WHEN-ELSE-END to show \"1\": \"checkpoint\" \"2\":\"restartpoint\".\n>\n> I suggested upthread to store the starting timeline instead. This way you can\n> deduce whether it's a restartpoint or a checkpoint, but you can also deduce\n> other information, like what was the starting WAL.\n\nI don't understand why we need the timeline here to just determine\nwhether it's a restartpoint or checkpoint. I know that the\nInsertTimeLineID is 0 during recovery.
IMO, emitting 1 for checkpoint\nand 2 for restartpoint in CreateCheckPoint and CreateRestartPoint\nrespectively and using CASE-WHEN-ELSE-END to show it in readable\nformat is the easiest way.\n\nCan't the checkpoint start LSN be deduced from\nPROGRESS_CHECKPOINT_LSN, checkPoint.redo?\n\nI'm completely against these pg_stat_get_progress_checkpoint_{type,\nkind, start_time} functions unless there's a strong case. IMO, we can\nachieve what we want without these functions as well.\n\n> > 11) I think it's discussed, are we going to add the pid of the\n> > checkpoint requestor?\n>\n> As mentioned upthread, there can be multiple backends that request a\n> checkpoint, so unless we want to store an array of pid we should store a number\n> of backend that are waiting for a new checkpoint.\n\nYeah, you are right. Let's not go that path and store an array of\npids. I don't see a strong use-case with the pid of the process\nrequesting checkpoint. If required, we can add it later once the\npg_stat_progress_checkpoint view gets in.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 28 Feb 2022 18:03:54 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "On Mon, Feb 28, 2022 at 06:03:54PM +0530, Bharath Rupireddy wrote:\n> On Mon, Feb 28, 2022 at 12:02 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > I suggested upthread to store the starting timeline instead. This way you can\n> > deduce whether it's a restartpoint or a checkpoint, but you can also deduce\n> > other information, like what was the starting WAL.\n> \n> I don't understand why we need the timeline here to just determine\n> whether it's a restartpoint or checkpoint.\n\nI'm not saying it's necessary, I'm saying that for the same space usage we can\nstore something a bit more useful. 
If no one cares about having the starting\ntimeline available for no extra cost then sure, let's just store the kind\ndirectly.\n\n> Can't the checkpoint start LSN be deduced from\n> PROGRESS_CHECKPOINT_LSN, checkPoint.redo?\n\nI'm not sure I'm following, isn't checkPoint.redo the checkpoint start LSN?\n\n> > As mentioned upthread, there can be multiple backends that request a\n> > checkpoint, so unless we want to store an array of pid we should store a number\n> > of backend that are waiting for a new checkpoint.\n> \n> Yeah, you are right. Let's not go that path and store an array of\n> pids. I don't see a strong use-case with the pid of the process\n> requesting checkpoint. If required, we can add it later once the\n> pg_stat_progress_checkpoint view gets in.\n\nI don't think that's really necessary to give the pid list.\n\nIf you requested a new checkpoint, it doesn't matter if it's only your backend\nthat triggered it, another backend or a few other dozen, the result will be the\nsame and you have the information that the request has been seen.
We could\nstore just a bool for that but having a number instead also gives a bit more\ninformation and may allow you to detect some broken logic on your client code\nif it keeps increasing.\n\n\n", "msg_date": "Mon, 28 Feb 2022 20:58:58 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "On Sun, 27 Feb 2022 at 16:14, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> 3) Why do we need this extra calculation for start_lsn?\n> Do you ever see a negative LSN or something here?\n> + ('0/0'::pg_lsn + (\n> + CASE\n> + WHEN (s.param3 < 0) THEN pow((2)::numeric, (64)::numeric)\n> + ELSE (0)::numeric\n> + END + (s.param3)::numeric)) AS start_lsn,\n\nYes: LSN can take up all of an uint64; whereas the pgstat column is a\nbigint type; thus the signed int64. This cast is OK as it wraps\naround, but that means we have to take care to correctly display the\nLSN when it is > 0x7FFF_FFFF_FFFF_FFFF; which is what we do here using\nthe special-casing for negative values.\n\nAs to whether it is reasonable: Generating 16GB of wal every second\n(2^34 bytes /sec) is probably not impossible (cpu <> memory bandwidth\nhas been > 20GB/sec for a while); and that leaves you 2^29 seconds of\ndatabase runtime; or about 17 years. Seeing that a cluster can be\n`pg_upgrade`d (which doesn't reset cluster LSN) since PG 9.0 from at\nleast version PG 8.4.0 (2009) (and through pg_migrator, from 8.3.0)),\nwe can assume that clusters hitting LSN=2^63 will be a reasonable\npossibility within the next few years.
As the lifespan of a PG release\nis about 5 years, it doesn't seem impossible that there will be actual\nclusters that are going to hit this naturally in the lifespan of PG15.\n\nIt is also possible that someone fat-fingers pg_resetwal; and creates\na cluster with LSN >= 2^63; resulting in negative values in the\ns.param3 field. Not likely, but we can force such situations; and as\nsuch we should handle that gracefully.\n\n> 4) Can't you use timestamptz_in(to_char(s.param4)) instead of\n> pg_stat_get_progress_checkpoint_start_time? I don't quite understand\n> the reasoning for having this function and it's named as *checkpoint*\n> when it doesn't do anything specific to the checkpoint at all?\n\nI hadn't thought of using the types' inout functions, but it looks\nlike timestamp IO functions use a formatted timestring, which won't\nwork with the epoch-based timestamp stored in the view.\n\n> Having 3 unnecessary functions that aren't useful to the users at all\n> in proc.dat will simply eatup the function oids IMO. Hence, I suggest\n> let's try to do without extra functions.\n\nI agree that (1) could be simplified, or at least fully expressed in\nSQL without exposing too many internals. 
If we're fine with exposing\ninternals like flags and type layouts, then (2), and arguably (4), can\nbe expressed in SQL as well.\n\n-Matthias\n\n\n", "msg_date": "Mon, 28 Feb 2022 14:10:25 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "> > 3) Why do we need this extra calculation for start_lsn?\n> > Do you ever see a negative LSN or something here?\n> > + ('0/0'::pg_lsn + (\n> > + CASE\n> > + WHEN (s.param3 < 0) THEN pow((2)::numeric, (64)::numeric)\n> > + ELSE (0)::numeric\n> > + END + (s.param3)::numeric)) AS start_lsn,\n>\n> Yes: LSN can take up all of an uint64; whereas the pgstat column is a\n> bigint type; thus the signed int64. This cast is OK as it wraps\n> around, but that means we have to take care to correctly display the\n> LSN when it is > 0x7FFF_FFFF_FFFF_FFFF; which is what we do here using\n> the special-casing for negative values.\n\nYes. The extra calculation is required here as we are storing uint64\nvalue in the variable of type int64. When we convert uint64 to int64\nthen the bit pattern is preserved (so no data is lost). The high-order\nbit becomes the sign bit and if the sign bit is set, both the sign and\nmagnitude of the value changes. To safely get the actual uint64 value\nwhatever was assigned, we need the above calculations.\n\n> > 4) Can't you use timestamptz_in(to_char(s.param4)) instead of\n> > pg_stat_get_progress_checkpoint_start_time?
I don't quite understand\n> > the reasoning for having this function and it's named as *checkpoint*\n> > when it doesn't do anything specific to the checkpoint at all?\n>\n> I hadn't thought of using the types' inout functions, but it looks\n> like timestamp IO functions use a formatted timestring, which won't\n> work with the epoch-based timestamp stored in the view.\n\nThere is a variation of to_timestamp() which takes UNIX epoch (float8)\nas an argument and converts it to timestamptz but we cannot directly\ncall this function with S.param4.\n\nTimestampTz\nGetCurrentTimestamp(void)\n{\n TimestampTz result;\n struct timeval tp;\n\n gettimeofday(&tp, NULL);\n\n result = (TimestampTz) tp.tv_sec -\n ((POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) * SECS_PER_DAY);\n result = (result * USECS_PER_SEC) + tp.tv_usec;\n\n return result;\n}\n\nS.param4 contains the output of the above function\n(GetCurrentTimestamp()) which returns Postgres epoch but the\nto_timestamp() expects UNIX epoch as input. So some calculation is\nrequired here. I feel the SQL 'to_timestamp(946684800 +\n(S.param4::float / 1000000)) AS start_time' works fine. The value\n'946684800' is equal to ((POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) *\nSECS_PER_DAY). I am not sure whether it is good practice to use this\nway. Kindly share your thoughts.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Mon, Feb 28, 2022 at 6:40 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Sun, 27 Feb 2022 at 16:14, Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > 3) Why do we need this extra calculation for start_lsn?\n> > Do you ever see a negative LSN or something here?\n> > + ('0/0'::pg_lsn + (\n> > + CASE\n> > + WHEN (s.param3 < 0) THEN pow((2)::numeric, (64)::numeric)\n> > + ELSE (0)::numeric\n> > + END + (s.param3)::numeric)) AS start_lsn,\n>\n> Yes: LSN can take up all of an uint64; whereas the pgstat column is a\n> bigint type; thus the signed int64. 
This cast is OK as it wraps\n> around, but that means we have to take care to correctly display the\n> LSN when it is > 0x7FFF_FFFF_FFFF_FFFF; which is what we do here using\n> the special-casing for negative values.\n>\n> As to whether it is reasonable: Generating 16GB of wal every second\n> (2^34 bytes /sec) is probably not impossible (cpu <> memory bandwidth\n> has been > 20GB/sec for a while); and that leaves you 2^29 seconds of\n> database runtime; or about 17 years. Seeing that a cluster can be\n> `pg_upgrade`d (which doesn't reset cluster LSN) since PG 9.0 from at\n> least version PG 8.4.0 (2009) (and through pg_migrator, from 8.3.0)),\n> we can assume that clusters hitting LSN=2^63 will be a reasonable\n> possibility within the next few years. As the lifespan of a PG release\n> is about 5 years, it doesn't seem impossible that there will be actual\n> clusters that are going to hit this naturally in the lifespan of PG15.\n>\n> It is also possible that someone fat-fingers pg_resetwal; and creates\n> a cluster with LSN >= 2^63; resulting in negative values in the\n> s.param3 field. Not likely, but we can force such situations; and as\n> such we should handle that gracefully.\n>\n> > 4) Can't you use timestamptz_in(to_char(s.param4)) instead of\n> > pg_stat_get_progress_checkpoint_start_time? I don't quite understand\n> > the reasoning for having this function and it's named as *checkpoint*\n> > when it doesn't do anything specific to the checkpoint at all?\n>\n> I hadn't thought of using the types' inout functions, but it looks\n> like timestamp IO functions use a formatted timestring, which won't\n> work with the epoch-based timestamp stored in the view.\n>\n> > Having 3 unnecessary functions that aren't useful to the users at all\n> > in proc.dat will simply eatup the function oids IMO. 
Hence, I suggest\n> > let's try to do without extra functions.\n>\n> I agree that (1) could be simplified, or at least fully expressed in\n> SQL without exposing too many internals. If we're fine with exposing\n> internals like flags and type layouts, then (2), and arguably (4), can\n> be expressed in SQL as well.\n>\n> -Matthias\n\n\n", "msg_date": "Tue, 1 Mar 2022 14:27:04 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "Thanks for reviewing.\n\n> > > I suggested upthread to store the starting timeline instead. This way you can\n> > > deduce whether it's a restartpoint or a checkpoint, but you can also deduce\n> > > other information, like what was the starting WAL.\n> >\n> > I don't understand why we need the timeline here to just determine\n> > whether it's a restartpoint or checkpoint.\n>\n> I'm not saying it's necessary, I'm saying that for the same space usage we can\n> store something a bit more useful. If no one cares about having the starting\n> timeline available for no extra cost then sure, let's just store the kind\n> directly.\n\nFixed.\n\n> 2) Can't we just have these checks inside CASE-WHEN-THEN-ELSE blocks\n> directly instead of new function pg_stat_get_progress_checkpoint_kind?\n> + snprintf(ckpt_kind, MAXPGPATH, \"%s%s%s%s%s%s%s%s%s\",\n> + (flags == 0) ? \"unknown\" : \"\",\n> + (flags & CHECKPOINT_IS_SHUTDOWN) ? \"shutdown \" : \"\",\n> + (flags & CHECKPOINT_END_OF_RECOVERY) ? \"end-of-recovery \" : \"\",\n> + (flags & CHECKPOINT_IMMEDIATE) ? \"immediate \" : \"\",\n> + (flags & CHECKPOINT_FORCE) ? \"force \" : \"\",\n> + (flags & CHECKPOINT_WAIT) ? \"wait \" : \"\",\n> + (flags & CHECKPOINT_CAUSE_XLOG) ? \"wal \" : \"\",\n> + (flags & CHECKPOINT_CAUSE_TIME) ? \"time \" : \"\",\n> + (flags & CHECKPOINT_FLUSH_ALL) ? 
\"flush-all\" : \"\");\n\nFixed.\n---\n\n> 5) Do we need a special phase for this checkpoint operation? I'm not\n> sure in which cases it will take a long time, but it looks like\n> there's a wait loop here.\n> vxids = GetVirtualXIDsDelayingChkpt(&nvxids);\n> if (nvxids > 0)\n> {\n> do\n> {\n> pg_usleep(10000L); /* wait for 10 msec */\n> } while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids));\n> }\n\nYes. It is better to add a separate phase here.\n---\n\n> Also, how about special phases for SyncPostCheckpoint(),\n> SyncPreCheckpoint(), InvalidateObsoleteReplicationSlots(),\n> PreallocXlogFiles() (it currently pre-allocates only 1 WAL file, but\n> it might be increase in future (?)), TruncateSUBTRANS()?\n\nSyncPreCheckpoint() is just incrementing a counter and\nPreallocXlogFiles() currently pre-allocates only 1 WAL file. I feel\nthere is no need to add any phases for these as of now. We can add in\nthe future if necessary. Added phases for SyncPostCheckpoint(),\nInvalidateObsoleteReplicationSlots() and TruncateSUBTRANS().\n---\n\n> 6) SLRU (Simple LRU) isn't a phase here, you can just say\n> PROGRESS_CHECKPOINT_PHASE_PREDICATE_LOCK_PAGES.\n> +\n> + pgstat_progress_update_param(PROGRESS_CHECKPOINT_PHASE,\n> + PROGRESS_CHECKPOINT_PHASE_SLRU_PAGES);\n> CheckPointPredicate();\n>\n> And :s/checkpointing SLRU pages/checkpointing predicate lock pages\n>+ WHEN 9 THEN 'checkpointing SLRU pages'\n\nFixed.\n---\n\n> 7) :s/PROGRESS_CHECKPOINT_PHASE_FILE_SYNC/PROGRESS_CHECKPOINT_PHASE_PROCESS_FILE_SYNC_REQUESTS\n\nI feel PROGRESS_CHECKPOINT_PHASE_FILE_SYNC is a better option here as\nit describes the purpose in less words.\n\n> And :s/WHEN 11 THEN 'performing sync requests'/WHEN 11 THEN\n> 'processing file sync requests'\n\nFixed.\n---\n\n> 8) :s/Finalizing/finalizing\n> + WHEN 14 THEN 'Finalizing'\n\nFixed.\n---\n\n> 9) :s/checkpointing snapshots/checkpointing logical replication snapshot files\n> + WHEN 3 THEN 'checkpointing snapshots'\n> :s/checkpointing logical rewrite 
mappings/checkpointing logical\n> replication rewrite mapping files\n> + WHEN 4 THEN 'checkpointing logical rewrite mappings'\n\nFixed.\n---\n\n> 10) I'm not sure if it's discussed, how about adding the number of\n> snapshot/mapping files so far the checkpoint has processed in file\n> processing while loops of\n> CheckPointSnapBuild/CheckPointLogicalRewriteHeap? Sometimes, there can\n> be many logical snapshot or mapping files and users may be interested\n> in knowing the so-far-processed-file-count.\n\nI had thought about this while sharing the v1 patch and mentioned my\nviews upthread. I feel it won't give meaningful progress information\n(It can be treated as statistics). Hence not included. Thoughts?\n\n> > > As mentioned upthread, there can be multiple backends that request a\n> > > checkpoint, so unless we want to store an array of pid we should store a number\n> > > of backend that are waiting for a new checkpoint.\n> >\n> > Yeah, you are right. Let's not go that path and store an array of\n> > pids. I don't see a strong use-case with the pid of the process\n> > requesting checkpoint. If required, we can add it later once the\n> > pg_stat_progress_checkpoint view gets in.\n>\n> I don't think that's really necessary to give the pid list.\n>\n> If you requested a new checkpoint, it doesn't matter if it's only your backend\n> that triggered it, another backend or a few other dozen, the result will be the\n> same and you have the information that the request has been seen. We could\n> store just a bool for that but having a number instead also gives a bit more\n> information and may allow you to detect some broken logic on your client code\n> if it keeps increasing.\n\nIt's a good metric to show in the view but the information is not\nreadily available. Additional code is required to calculate the number\nof requests. Is it worth doing that? 
I feel this can be added later if\nrequired.\n\nPlease find the v4 patch attached and share your thoughts.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Tue, Mar 1, 2022 at 2:27 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > > 3) Why do we need this extra calculation for start_lsn?\n> > > Do you ever see a negative LSN or something here?\n> > > + ('0/0'::pg_lsn + (\n> > > + CASE\n> > > + WHEN (s.param3 < 0) THEN pow((2)::numeric, (64)::numeric)\n> > > + ELSE (0)::numeric\n> > > + END + (s.param3)::numeric)) AS start_lsn,\n> >\n> > Yes: LSN can take up all of an uint64; whereas the pgstat column is a\n> > bigint type; thus the signed int64. This cast is OK as it wraps\n> > around, but that means we have to take care to correctly display the\n> > LSN when it is > 0x7FFF_FFFF_FFFF_FFFF; which is what we do here using\n> > the special-casing for negative values.\n>\n> Yes. The extra calculation is required here as we are storing a uint64\n> value in a variable of type int64. When we convert uint64 to int64\n> then the bit pattern is preserved (so no data is lost). The high-order\n> bit becomes the sign bit and if the sign bit is set, both the sign and\n> magnitude of the value change. To safely get back the actual uint64 value\n> that was assigned, we need the above calculations.\n>\n> > > 4) Can't you use timestamptz_in(to_char(s.param4)) instead of\n> > > pg_stat_get_progress_checkpoint_start_time? 
I don't quite understand\n> > > the reasoning for having this function and it's named as *checkpoint*\n> > > when it doesn't do anything specific to the checkpoint at all?\n> >\n> > I hadn't thought of using the types' inout functions, but it looks\n> > like timestamp IO functions use a formatted timestring, which won't\n> > work with the epoch-based timestamp stored in the view.\n>\n> There is a variation of to_timestamp() which takes UNIX epoch (float8)\n> as an argument and converts it to timestamptz but we cannot directly\n> call this function with S.param4.\n>\n> TimestampTz\n> GetCurrentTimestamp(void)\n> {\n> TimestampTz result;\n> struct timeval tp;\n>\n> gettimeofday(&tp, NULL);\n>\n> result = (TimestampTz) tp.tv_sec -\n> ((POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) * SECS_PER_DAY);\n> result = (result * USECS_PER_SEC) + tp.tv_usec;\n>\n> return result;\n> }\n>\n> S.param4 contains the output of the above function\n> (GetCurrentTimestamp()) which returns Postgres epoch but the\n> to_timestamp() expects UNIX epoch as input. So some calculation is\n> required here. I feel the SQL 'to_timestamp(946684800 +\n> (S.param4::float / 1000000)) AS start_time' works fine. The value\n> '946684800' is equal to ((POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) *\n> SECS_PER_DAY). I am not sure whether it is good practice to use this\n> way. 
Kindly share your thoughts.\n>\n> Thanks & Regards,\n> Nitin Jadhav\n>\n> On Mon, Feb 28, 2022 at 6:40 PM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> >\n> > On Sun, 27 Feb 2022 at 16:14, Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > 3) Why do we need this extra calculation for start_lsn?\n> > > Do you ever see a negative LSN or something here?\n> > > + ('0/0'::pg_lsn + (\n> > > + CASE\n> > > + WHEN (s.param3 < 0) THEN pow((2)::numeric, (64)::numeric)\n> > > + ELSE (0)::numeric\n> > > + END + (s.param3)::numeric)) AS start_lsn,\n> >\n> > Yes: LSN can take up all of an uint64; whereas the pgstat column is a\n> > bigint type; thus the signed int64. This cast is OK as it wraps\n> > around, but that means we have to take care to correctly display the\n> > LSN when it is > 0x7FFF_FFFF_FFFF_FFFF; which is what we do here using\n> > the special-casing for negative values.\n> >\n> > As to whether it is reasonable: Generating 16GB of wal every second\n> > (2^34 bytes /sec) is probably not impossible (cpu <> memory bandwidth\n> > has been > 20GB/sec for a while); and that leaves you 2^29 seconds of\n> > database runtime; or about 17 years. Seeing that a cluster can be\n> > `pg_upgrade`d (which doesn't reset cluster LSN) since PG 9.0 from at\n> > least version PG 8.4.0 (2009) (and through pg_migrator, from 8.3.0)),\n> > we can assume that clusters hitting LSN=2^63 will be a reasonable\n> > possibility within the next few years. As the lifespan of a PG release\n> > is about 5 years, it doesn't seem impossible that there will be actual\n> > clusters that are going to hit this naturally in the lifespan of PG15.\n> >\n> > It is also possible that someone fat-fingers pg_resetwal; and creates\n> > a cluster with LSN >= 2^63; resulting in negative values in the\n> > s.param3 field. 
Not likely, but we can force such situations; and as\n> > such we should handle that gracefully.\n> >\n> > > 4) Can't you use timestamptz_in(to_char(s.param4)) instead of\n> > > pg_stat_get_progress_checkpoint_start_time? I don't quite understand\n> > > the reasoning for having this function and it's named as *checkpoint*\n> > > when it doesn't do anything specific to the checkpoint at all?\n> >\n> > I hadn't thought of using the types' inout functions, but it looks\n> > like timestamp IO functions use a formatted timestring, which won't\n> > work with the epoch-based timestamp stored in the view.\n> >\n> > > Having 3 unnecessary functions that aren't useful to the users at all\n> > > in proc.dat will simply eatup the function oids IMO. Hence, I suggest\n> > > let's try to do without extra functions.\n> >\n> > I agree that (1) could be simplified, or at least fully expressed in\n> > SQL without exposing too many internals. If we're fine with exposing\n> > internals like flags and type layouts, then (2), and arguably (4), can\n> > be expressed in SQL as well.\n> >\n> > -Matthias", "msg_date": "Wed, 2 Mar 2022 16:45:20 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "On Wed, Mar 2, 2022 at 4:45 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> > Also, how about special phases for SyncPostCheckpoint(),\n> > SyncPreCheckpoint(), InvalidateObsoleteReplicationSlots(),\n> > PreallocXlogFiles() (it currently pre-allocates only 1 WAL file, but\n> > it might be increase in future (?)), TruncateSUBTRANS()?\n>\n> SyncPreCheckpoint() is just incrementing a counter and\n> PreallocXlogFiles() currently pre-allocates only 1 WAL file. I feel\n> there is no need to add any phases for these as of now. We can add in\n> the future if necessary. 
Added phases for SyncPostCheckpoint(),\n> InvalidateObsoleteReplicationSlots() and TruncateSUBTRANS().\n\nOkay.\n\n> > 10) I'm not sure if it's discussed, how about adding the number of\n> > snapshot/mapping files so far the checkpoint has processed in file\n> > processing while loops of\n> > CheckPointSnapBuild/CheckPointLogicalRewriteHeap? Sometimes, there can\n> > be many logical snapshot or mapping files and users may be interested\n> > in knowing the so-far-processed-file-count.\n>\n> I had thought about this while sharing the v1 patch and mentioned my\n> views upthread. I feel it won't give meaningful progress information\n> (It can be treated as statistics). Hence not included. Thoughts?\n\nOkay. If there are any complaints about it we can always add them later.\n\n> > > > As mentioned upthread, there can be multiple backends that request a\n> > > > checkpoint, so unless we want to store an array of pid we should store a number\n> > > > of backend that are waiting for a new checkpoint.\n> > >\n> > > Yeah, you are right. Let's not go that path and store an array of\n> > > pids. I don't see a strong use-case with the pid of the process\n> > > requesting checkpoint. If required, we can add it later once the\n> > > pg_stat_progress_checkpoint view gets in.\n> >\n> > I don't think that's really necessary to give the pid list.\n> >\n> > If you requested a new checkpoint, it doesn't matter if it's only your backend\n> > that triggered it, another backend or a few other dozen, the result will be the\n> > same and you have the information that the request has been seen. We could\n> > store just a bool for that but having a number instead also gives a bit more\n> > information and may allow you to detect some broken logic on your client code\n> > if it keeps increasing.\n>\n> It's a good metric to show in the view but the information is not\n> readily available. Additional code is required to calculate the number\n> of requests. Is it worth doing that? 
I feel this can be added later if\n> required.\n\nYes, we can always add it later if required.\n\n> Please find the v4 patch attached and share your thoughts.\n\nI reviewed the v4 patch; here are my comments:\n\n1) Can we convert below into pgstat_progress_update_multi_param, just\nto avoid function calls?\npgstat_progress_update_param(PROGRESS_CHECKPOINT_LSN, checkPoint.redo);\npgstat_progress_update_param(PROGRESS_CHECKPOINT_PHASE,\n\n2) Why are we not having a special phase for CheckPointReplicationOrigin,\nas it does a good bunch of work (writing to disk, XLogFlush,\ndurable_rename), especially when max_replication_slots is large?\n\n3) I don't think \"requested\" is necessary here, as it doesn't add any\nvalue and it's not a checkpoint kind as such; you can remove it.\n\n4) s:/'recycling old XLOG files'/'recycling old WAL files'\n+ WHEN 16 THEN 'recycling old XLOG files'\n\n5) Can we place the CREATE VIEW pg_stat_progress_checkpoint AS definition\nnext to pg_stat_progress_copy in system_view.sql? It looks like all\nthe progress reporting views are next to each other.\n\n6) How about shutdown and end-of-recovery checkpoint? Are you planning\nto have an ereport_startup_progress mechanism as 0002?\n\n7) I think you don't need to call checkpoint_progress_start,\npgstat_progress_update_param, or any other progress reporting function\nfor shutdown and end-of-recovery checkpoints, right?\n\n8) Not for all kinds of checkpoints, right? pg_stat_progress_checkpoint\ncan't show a progress report for shutdown and end-of-recovery\ncheckpoints, so I think you need to specify that here in wal.sgml and\ncheckpoint.sgml.\n+ command <command>CHECKPOINT</command>. The checkpointer process running the\n+ checkpoint will report its progress in the\n+ <structname>pg_stat_progress_checkpoint</structname> view. See\n+ <xref linkend=\"checkpoint-progress-reporting\"/> for details.\n\n9) Can you add a test case for pg_stat_progress_checkpoint view? I\nthink it's good to add one. 
See below for reference:\n-- Add a trigger to catch and print the contents of the catalog view\n-- pg_stat_progress_copy during data insertion. This allows to test\n-- the validation of some progress reports for COPY FROM where the trigger\n-- would fire.\ncreate function notice_after_tab_progress_reporting() returns trigger AS\n$$\ndeclare report record;\n\n10) Typo: it's not \"is happens\"\n+ The checkpoint is happens without delays.\n\n11) Can you be specific about what those \"some operations\" that forced a\ncheckpoint are? Maybe basebackup, createdb or something?\n+ The checkpoint is started because some operation forced a checkpoint.\n\n12) Can you elaborate a bit here on who waits? Something like: the\nbackend that requested the checkpoint will wait until its completion ....\n+ Wait for completion before returning.\n\n13) \"removing unneeded or flushing needed logical rewrite mapping files\"\n+ The checkpointer process is currently removing/flushing the logical\n\n14) \"old WAL files\"\n+ The checkpointer process is currently recycling old XLOG files.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 2 Mar 2022 23:52:31 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "Here are some of my review comments on the latest patch:\n\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>type</structfield> <type>text</type>\n+ </para>\n+ <para>\n+ Type of checkpoint. See <xref linkend=\"checkpoint-types\"/>.\n+ </para></entry>\n+ </row>\n+\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>kind</structfield> <type>text</type>\n+ </para>\n+ <para>\n+ Kind of checkpoint. 
See <xref linkend=\"checkpoint-kinds\"/>.\n+ </para></entry>\n+ </row>\n\nThis looks a bit confusing. Two columns, one with the name \"checkpoint\ntypes\" and another \"checkpoint kinds\". You can probably rename\ncheckpoint-kinds to checkpoint-flags and let the checkpoint-types be\nas-it-is.\n\n==\n\n+ <entry><structname>pg_stat_progress_checkpoint</structname><indexterm><primary>pg_stat_progress_checkpoint</primary></indexterm></entry>\n+ <entry>One row only, showing the progress of the checkpoint.\n\nLet's make this message consistent with the already existing message\nfor pg_stat_wal_receiver. See description for pg_stat_wal_receiver\nview in \"Dynamic Statistics Views\" table.\n\n==\n\n[local]:5432 ashu@postgres=# select * from pg_stat_progress_checkpoint;\n-[ RECORD 1 ]-----+-------------------------------------\npid | 22043\ntype | checkpoint\nkind | immediate force wait requested time\n\nI think the output in the kind column can be displayed as {immediate,\nforce, wait, requested, time}. By the way these are all checkpoint\nflags so it is better to display it as checkpoint flags instead of\ncheckpoint kind as mentioned in one of my previous comments.\n\n==\n\n[local]:5432 ashu@postgres=# select * from pg_stat_progress_checkpoint;\n-[ RECORD 1 ]-----+-------------------------------------\npid | 22043\ntype | checkpoint\nkind | immediate force wait requested time\nstart_lsn | 0/14C60F8\nstart_time | 2022-03-03 18:59:56.018662+05:30\nphase | performing two phase checkpoint\n\n\nThis is the output I see when the checkpointer process has come out of\nthe two phase checkpoint and is currently writing checkpoint xlog\nrecords and doing other stuff like updating control files etc. Is this\nokay?\n\n==\n\nThe output of log_checkpoint shows the number of buffers written is 3\nwhereas the output of pg_stat_progress_checkpoint shows it as 0. 
See\nbelow:\n\n2022-03-03 20:04:45.643 IST [22043] LOG: checkpoint complete: wrote 3\nbuffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled;\nwrite=24.652 s, sync=104.256 s, total=3889.625 s; sync files=2,\nlongest=0.011 s, average=0.008 s; distance=0 kB, estimate=0 kB\n\n--\n\n[local]:5432 ashu@postgres=# select * from pg_stat_progress_checkpoint;\n-[ RECORD 1 ]-----+-------------------------------------\npid | 22043\ntype | checkpoint\nkind | immediate force wait requested time\nstart_lsn | 0/14C60F8\nstart_time | 2022-03-03 18:59:56.018662+05:30\nphase | finalizing\nbuffers_total | 0\nbuffers_processed | 0\nbuffers_written | 0\n\nAny idea why this mismatch?\n\n==\n\nI think we can add a couple more pieces of information to this view -\nthe start_time for the buffer write operation and the start_time for the\nbuffer sync operation. These are two very time-consuming tasks in a\ncheckpoint, and people would find it useful to know how much time is\nbeing taken by the checkpoint in the I/O phases. Thoughts?\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Wed, Mar 2, 2022 at 4:45 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> Thanks for reviewing.\n>\n> > > > I suggested upthread to store the starting timeline instead. This way you can\n> > > > deduce whether it's a restartpoint or a checkpoint, but you can also deduce\n> > > > other information, like what was the starting WAL.\n> > >\n> > > I don't understand why we need the timeline here to just determine\n> > > whether it's a restartpoint or checkpoint.\n> >\n> > I'm not saying it's necessary, I'm saying that for the same space usage we can\n> > store something a bit more useful. 
If no one cares about having the starting\n> > timeline available for no extra cost then sure, let's just store the kind\n> > directly.\n>\n> Fixed.\n>\n> > 2) Can't we just have these checks inside CASE-WHEN-THEN-ELSE blocks\n> > directly instead of new function pg_stat_get_progress_checkpoint_kind?\n> > + snprintf(ckpt_kind, MAXPGPATH, \"%s%s%s%s%s%s%s%s%s\",\n> > + (flags == 0) ? \"unknown\" : \"\",\n> > + (flags & CHECKPOINT_IS_SHUTDOWN) ? \"shutdown \" : \"\",\n> > + (flags & CHECKPOINT_END_OF_RECOVERY) ? \"end-of-recovery \" : \"\",\n> > + (flags & CHECKPOINT_IMMEDIATE) ? \"immediate \" : \"\",\n> > + (flags & CHECKPOINT_FORCE) ? \"force \" : \"\",\n> > + (flags & CHECKPOINT_WAIT) ? \"wait \" : \"\",\n> > + (flags & CHECKPOINT_CAUSE_XLOG) ? \"wal \" : \"\",\n> > + (flags & CHECKPOINT_CAUSE_TIME) ? \"time \" : \"\",\n> > + (flags & CHECKPOINT_FLUSH_ALL) ? \"flush-all\" : \"\");\n>\n> Fixed.\n> ---\n>\n> > 5) Do we need a special phase for this checkpoint operation? I'm not\n> > sure in which cases it will take a long time, but it looks like\n> > there's a wait loop here.\n> > vxids = GetVirtualXIDsDelayingChkpt(&nvxids);\n> > if (nvxids > 0)\n> > {\n> > do\n> > {\n> > pg_usleep(10000L); /* wait for 10 msec */\n> > } while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids));\n> > }\n>\n> Yes. It is better to add a separate phase here.\n> ---\n>\n> > Also, how about special phases for SyncPostCheckpoint(),\n> > SyncPreCheckpoint(), InvalidateObsoleteReplicationSlots(),\n> > PreallocXlogFiles() (it currently pre-allocates only 1 WAL file, but\n> > it might be increase in future (?)), TruncateSUBTRANS()?\n>\n> SyncPreCheckpoint() is just incrementing a counter and\n> PreallocXlogFiles() currently pre-allocates only 1 WAL file. I feel\n> there is no need to add any phases for these as of now. We can add in\n> the future if necessary. 
Added phases for SyncPostCheckpoint(),\n> InvalidateObsoleteReplicationSlots() and TruncateSUBTRANS().\n> ---\n>\n> > 6) SLRU (Simple LRU) isn't a phase here, you can just say\n> > PROGRESS_CHECKPOINT_PHASE_PREDICATE_LOCK_PAGES.\n> > +\n> > + pgstat_progress_update_param(PROGRESS_CHECKPOINT_PHASE,\n> > + PROGRESS_CHECKPOINT_PHASE_SLRU_PAGES);\n> > CheckPointPredicate();\n> >\n> > And :s/checkpointing SLRU pages/checkpointing predicate lock pages\n> >+ WHEN 9 THEN 'checkpointing SLRU pages'\n>\n> Fixed.\n> ---\n>\n> > 7) :s/PROGRESS_CHECKPOINT_PHASE_FILE_SYNC/PROGRESS_CHECKPOINT_PHASE_PROCESS_FILE_SYNC_REQUESTS\n>\n> I feel PROGRESS_CHECKPOINT_PHASE_FILE_SYNC is a better option here as\n> it describes the purpose in less words.\n>\n> > And :s/WHEN 11 THEN 'performing sync requests'/WHEN 11 THEN\n> > 'processing file sync requests'\n>\n> Fixed.\n> ---\n>\n> > 8) :s/Finalizing/finalizing\n> > + WHEN 14 THEN 'Finalizing'\n>\n> Fixed.\n> ---\n>\n> > 9) :s/checkpointing snapshots/checkpointing logical replication snapshot files\n> > + WHEN 3 THEN 'checkpointing snapshots'\n> > :s/checkpointing logical rewrite mappings/checkpointing logical\n> > replication rewrite mapping files\n> > + WHEN 4 THEN 'checkpointing logical rewrite mappings'\n>\n> Fixed.\n> ---\n>\n> > 10) I'm not sure if it's discussed, how about adding the number of\n> > snapshot/mapping files so far the checkpoint has processed in file\n> > processing while loops of\n> > CheckPointSnapBuild/CheckPointLogicalRewriteHeap? Sometimes, there can\n> > be many logical snapshot or mapping files and users may be interested\n> > in knowing the so-far-processed-file-count.\n>\n> I had thought about this while sharing the v1 patch and mentioned my\n> views upthread. I feel it won't give meaningful progress information\n> (It can be treated as statistics). Hence not included. 
Thoughts?\n>\n> > > > As mentioned upthread, there can be multiple backends that request a\n> > > > checkpoint, so unless we want to store an array of pid we should store a number\n> > > > of backend that are waiting for a new checkpoint.\n> > >\n> > > Yeah, you are right. Let's not go that path and store an array of\n> > > pids. I don't see a strong use-case with the pid of the process\n> > > requesting checkpoint. If required, we can add it later once the\n> > > pg_stat_progress_checkpoint view gets in.\n> >\n> > I don't think that's really necessary to give the pid list.\n> >\n> > If you requested a new checkpoint, it doesn't matter if it's only your backend\n> > that triggered it, another backend or a few other dozen, the result will be the\n> > same and you have the information that the request has been seen. We could\n> > store just a bool for that but having a number instead also gives a bit more\n> > information and may allow you to detect some broken logic on your client code\n> > if it keeps increasing.\n>\n> It's a good metric to show in the view but the information is not\n> readily available. Additional code is required to calculate the number\n> of requests. Is it worth doing that? I feel this can be added later if\n> required.\n>\n> Please find the v4 patch attached and share your thoughts.\n>\n> Thanks & Regards,\n> Nitin Jadhav\n>\n> On Tue, Mar 1, 2022 at 2:27 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> >\n> > > > 3) Why do we need this extra calculation for start_lsn?\n> > > > Do you ever see a negative LSN or something here?\n> > > > + ('0/0'::pg_lsn + (\n> > > > + CASE\n> > > > + WHEN (s.param3 < 0) THEN pow((2)::numeric, (64)::numeric)\n> > > > + ELSE (0)::numeric\n> > > > + END + (s.param3)::numeric)) AS start_lsn,\n> > >\n> > > Yes: LSN can take up all of an uint64; whereas the pgstat column is a\n> > > bigint type; thus the signed int64. 
This cast is OK as it wraps\n> > > around, but that means we have to take care to correctly display the\n> > > LSN when it is > 0x7FFF_FFFF_FFFF_FFFF; which is what we do here using\n> > > the special-casing for negative values.\n> >\n> > Yes. The extra calculation is required here as we are storing a uint64\n> > value in a variable of type int64. When we convert uint64 to int64\n> > then the bit pattern is preserved (so no data is lost). The high-order\n> > bit becomes the sign bit and if the sign bit is set, both the sign and\n> > magnitude of the value change. To safely get back the actual uint64 value\n> > that was assigned, we need the above calculations.\n> >\n> > > > 4) Can't you use timestamptz_in(to_char(s.param4)) instead of\n> > > > pg_stat_get_progress_checkpoint_start_time? I don't quite understand\n> > > > the reasoning for having this function and it's named as *checkpoint*\n> > > > when it doesn't do anything specific to the checkpoint at all?\n> > >\n> > > I hadn't thought of using the types' inout functions, but it looks\n> > > like timestamp IO functions use a formatted timestring, which won't\n> > > work with the epoch-based timestamp stored in the view.\n> >\n> > There is a variation of to_timestamp() which takes UNIX epoch (float8)\n> > as an argument and converts it to timestamptz but we cannot directly\n> > call this function with S.param4.\n> >\n> > TimestampTz\n> > GetCurrentTimestamp(void)\n> > {\n> > TimestampTz result;\n> > struct timeval tp;\n> >\n> > gettimeofday(&tp, NULL);\n> >\n> > result = (TimestampTz) tp.tv_sec -\n> > ((POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) * SECS_PER_DAY);\n> > result = (result * USECS_PER_SEC) + tp.tv_usec;\n> >\n> > return result;\n> > }\n> >\n> > S.param4 contains the output of the above function\n> > (GetCurrentTimestamp()) which returns Postgres epoch but the\n> > to_timestamp() expects UNIX epoch as input. So some calculation is\n> > required here. 
I feel the SQL 'to_timestamp(946684800 +\n> > (S.param4::float / 1000000)) AS start_time' works fine. The value\n> > '946684800' is equal to ((POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) *\n> > SECS_PER_DAY). I am not sure whether it is good practice to use this\n> > way. Kindly share your thoughts.\n> >\n> > Thanks & Regards,\n> > Nitin Jadhav\n> >\n> > On Mon, Feb 28, 2022 at 6:40 PM Matthias van de Meent\n> > <boekewurm+postgres@gmail.com> wrote:\n> > >\n> > > On Sun, 27 Feb 2022 at 16:14, Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > 3) Why do we need this extra calculation for start_lsn?\n> > > > Do you ever see a negative LSN or something here?\n> > > > + ('0/0'::pg_lsn + (\n> > > > + CASE\n> > > > + WHEN (s.param3 < 0) THEN pow((2)::numeric, (64)::numeric)\n> > > > + ELSE (0)::numeric\n> > > > + END + (s.param3)::numeric)) AS start_lsn,\n> > >\n> > > Yes: LSN can take up all of an uint64; whereas the pgstat column is a\n> > > bigint type; thus the signed int64. This cast is OK as it wraps\n> > > around, but that means we have to take care to correctly display the\n> > > LSN when it is > 0x7FFF_FFFF_FFFF_FFFF; which is what we do here using\n> > > the special-casing for negative values.\n> > >\n> > > As to whether it is reasonable: Generating 16GB of wal every second\n> > > (2^34 bytes /sec) is probably not impossible (cpu <> memory bandwidth\n> > > has been > 20GB/sec for a while); and that leaves you 2^29 seconds of\n> > > database runtime; or about 17 years. Seeing that a cluster can be\n> > > `pg_upgrade`d (which doesn't reset cluster LSN) since PG 9.0 from at\n> > > least version PG 8.4.0 (2009) (and through pg_migrator, from 8.3.0)),\n> > > we can assume that clusters hitting LSN=2^63 will be a reasonable\n> > > possibility within the next few years. 
As the lifespan of a PG release\n> > > is about 5 years, it doesn't seem impossible that there will be actual\n> > > clusters that are going to hit this naturally in the lifespan of PG15.\n> > >\n> > > It is also possible that someone fat-fingers pg_resetwal; and creates\n> > > a cluster with LSN >= 2^63; resulting in negative values in the\n> > > s.param3 field. Not likely, but we can force such situations; and as\n> > > such we should handle that gracefully.\n> > >\n> > > > 4) Can't you use timestamptz_in(to_char(s.param4)) instead of\n> > > > pg_stat_get_progress_checkpoint_start_time? I don't quite understand\n> > > > the reasoning for having this function and it's named as *checkpoint*\n> > > > when it doesn't do anything specific to the checkpoint at all?\n> > >\n> > > I hadn't thought of using the types' inout functions, but it looks\n> > > like timestamp IO functions use a formatted timestring, which won't\n> > > work with the epoch-based timestamp stored in the view.\n> > >\n> > > > Having 3 unnecessary functions that aren't useful to the users at all\n> > > > in proc.dat will simply eatup the function oids IMO. Hence, I suggest\n> > > > let's try to do without extra functions.\n> > >\n> > > I agree that (1) could be simplified, or at least fully expressed in\n> > > SQL without exposing too many internals. 
If we're fine with exposing\n> > > internals like flags and type layouts, then (2), and arguably (4), can\n> > > be expressed in SQL as well.\n> > >\n> > > -Matthias\n\n\n", "msg_date": "Thu, 3 Mar 2022 20:30:10 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "On Wed, Mar 2, 2022 at 7:15 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > > > As mentioned upthread, there can be multiple backends that request a\n> > > > checkpoint, so unless we want to store an array of pid we should store a number\n> > > > of backend that are waiting for a new checkpoint.\n>\n> It's a good metric to show in the view but the information is not\n> readily available. Additional code is required to calculate the number\n> of requests. Is it worth doing that? I feel this can be added later if\n> required.\n\nIs it that hard or costly to do? Just sending a message to increment\nthe stat counter in RequestCheckpoint() would be enough.\n\nAlso, unless I'm missing something it's still only showing the initial\ncheckpoint flags, so it's *not* showing what the checkpoint is really\ndoing, only what the checkpoint may be doing if nothing else happens.\nIt just feels wrong. You could even use that ckpt_flags info to know\nthat at least one backend has requested a new checkpoint, if you don't\nwant to have a number of backends.\n\n\n", "msg_date": "Fri, 4 Mar 2022 02:28:02 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "Thanks for reviewing.\n\n> 6) How about shutdown and end-of-recovery checkpoint? 
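The epoch constant and the signed-LSN wraparound discussed in the exchange above can be checked with a small sketch (Python here, purely to verify the arithmetic; the constants POSTGRES_EPOCH_JDATE = 2451545 and UNIX_EPOCH_JDATE = 2440588 are taken from PostgreSQL's timestamp header, and the helper mirrors the CASE expression from the view definition):

```python
# Verify that 946684800 equals (POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) * SECS_PER_DAY,
# i.e. the offset between the PostgreSQL epoch (2000-01-01) and the Unix epoch (1970-01-01).
POSTGRES_EPOCH_JDATE = 2451545   # Julian day number of 2000-01-01
UNIX_EPOCH_JDATE = 2440588       # Julian day number of 1970-01-01
SECS_PER_DAY = 86400
pg_to_unix_epoch_secs = (POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) * SECS_PER_DAY
assert pg_to_unix_epoch_secs == 946684800

def lsn_from_signed(param3: int) -> int:
    """Mirror the view's CASE: map the signed bigint progress param back to a uint64 LSN."""
    return param3 + 2**64 if param3 < 0 else param3

# An LSN past 2^63 comes back through the bigint progress column as a negative value.
big_lsn = 2**63 + 12345
as_bigint = big_lsn - 2**64          # what the signed int64 column would hold
assert as_bigint < 0
assert lsn_from_signed(as_bigint) == big_lsn
```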
Are you planning\n> to have an ereport_startup_progress mechanism as 0002?\n\nI thought of including it earlier then I felt lets first make the\ncurrent patch stable. Once all the fields are properly decided and the\npatch gets in then we can easily extend the functionality to shutdown\nand end-of-recovery cases. I have also observed that the timer\nfunctionality wont work properly in case of shutdown as we are doing\nan immediate checkpoint. So this needs a lot of discussion and I would\nlike to handle this on a separate thread.\n---\n\n> 7) I think you don't need to call checkpoint_progress_start and\n> pgstat_progress_update_param, any other progress reporting function\n> for shutdown and end-of-recovery checkpoint right?\n\nI had included the guards earlier and then removed later based on the\ndiscussion upthread.\n---\n\n> [local]:5432 ashu@postgres=# select * from pg_stat_progress_checkpoint;\n> -[ RECORD 1 ]-----+-------------------------------------\n> pid | 22043\n> type | checkpoint\n> kind | immediate force wait requested time\n> start_lsn | 0/14C60F8\n> start_time | 2022-03-03 18:59:56.018662+05:30\n> phase | performing two phase checkpoint\n>\n>\n> This is the output I see when the checkpointer process has come out of\n> the two phase checkpoint and is currently writing checkpoint xlog\n> records and doing other stuff like updating control files etc. Is this\n> okay?\n\nThe idea behind choosing the phases is based on the functionality\nwhich takes longer time to execute. Since after two phase checkpoint\ntill post checkpoint cleanup won't take much time to execute, I have\nnot added any additional phase for that. But I also agree that this\ngives wrong information to the user. How about mentioning the phase\ninformation at the end of each phase like \"Initializing\",\n\"Initialization done\", ..., \"two phase checkpoint done\", \"post\ncheckpoint cleanup done\", .., \"finalizing\". 
Except for the first phase\n(\"initializing\") and last phase (\"finalizing\"), all the other phases\ndescribe the end of a certain operation. I feel this gives correct\ninformation even though the phase name/description does not represent\nthe entire code block between two phases. For example if the current\nphase is ''two phase checkpoint done\". Then the user can infer that\nthe checkpointer has done till two phase checkpoint and it is doing\nother stuff that are after that. Thoughts?\n\n> The output of log_checkpoint shows the number of buffers written is 3\n> whereas the output of pg_stat_progress_checkpoint shows it as 0. See\n> below:\n>\n> 2022-03-03 20:04:45.643 IST [22043] LOG: checkpoint complete: wrote 3\n> buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled;\n> write=24.652 s, sync=104.256 s, total=3889.625 s; sync files=2,\n> longest=0.011 s, average=0.008 s; distance=0 kB, estimate=0 kB\n>\n> --\n>\n> [local]:5432 ashu@postgres=# select * from pg_stat_progress_checkpoint;\n> -[ RECORD 1 ]-----+-------------------------------------\n> pid | 22043\n> type | checkpoint\n> kind | immediate force wait requested time\n> start_lsn | 0/14C60F8\n> start_time | 2022-03-03 18:59:56.018662+05:30\n> phase | finalizing\n> buffers_total | 0\n> buffers_processed | 0\n> buffers_written | 0\n>\n> Any idea why this mismatch?\n\nGood catch. In BufferSync() we have 'num_to_scan' (buffers_total)\nwhich indicates the total number of buffers to be processed. Based on\nthat, the 'buffers_processed' and 'buffers_written' counter gets\nincremented. I meant these values may reach upto 'buffers_total'. The\ncurrent pg_stat_progress_view support above information. There is\nanother place when 'ckpt_bufs_written' gets incremented (In\nSlruInternalWritePage()). This increment is above the 'buffers_total'\nvalue and it is included in the server log message (checkpoint end)\nand not included in the view. I am a bit confused here. 
If we include\nthis increment in the view then we cannot calculate the exact\n'buffers_total' beforehand. Can we increment the 'buffers_toal' also\nwhen 'ckpt_bufs_written' gets incremented so that we can match the\nbehaviour with checkpoint end message? Please share your thoughts.\n---\n\n> I think we can add a couple of more information to this view -\n> start_time for buffer write operation and start_time for buffer sync\n> operation. These are two very time consuming tasks in a checkpoint and\n> people would find it useful to know how much time is being taken by\n> the checkpoint in I/O operation phase. thoughts?\n\nI felt the detailed progress is getting shown for these 2 phases of\nthe checkpoint like 'buffers_processed', 'buffers_written' and\n'files_synced'. Hence I did not think about adding start time and If\nit is really required, then I can add.\n\n> Is it that hard or costly to do? Just sending a message to increment\n> the stat counter in RequestCheckpoint() would be enough.\n>\n> Also, unless I'm missing something it's still only showing the initial\n> checkpoint flags, so it's *not* showing what the checkpoint is really\n> doing, only what the checkpoint may be doing if nothing else happens.\n> It just feels wrong. You could even use that ckpt_flags info to know\n> that at least one backend has requested a new checkpoint, if you don't\n> want to have a number of backends.\n\nI think using ckpt_flags to display whether any new requests have been\nmade or not is a good idea. 
I will include it in the next patch.\n\nThanks & Regards,\nNitin Jadhav\nOn Thu, Mar 3, 2022 at 11:58 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Wed, Mar 2, 2022 at 7:15 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> >\n> > > > > As mentioned upthread, there can be multiple backends that request a\n> > > > > checkpoint, so unless we want to store an array of pid we should store a number\n> > > > > of backend that are waiting for a new checkpoint.\n> >\n> > It's a good metric to show in the view but the information is not\n> > readily available. Additional code is required to calculate the number\n> > of requests. Is it worth doing that? I feel this can be added later if\n> > required.\n>\n> Is it that hard or costly to do? Just sending a message to increment\n> the stat counter in RequestCheckpoint() would be enough.\n>\n> Also, unless I'm missing something it's still only showing the initial\n> checkpoint flags, so it's *not* showing what the checkpoint is really\n> doing, only what the checkpoint may be doing if nothing else happens.\n> It just feels wrong. You could even use that ckpt_flags info to know\n> that at least one backend has requested a new checkpoint, if you don't\n> want to have a number of backends.\n\n\n", "msg_date": "Fri, 4 Mar 2022 16:59:04 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "Please don't mix comments from multiple reviewers into one thread.\nIt's hard to understand which comments are mine or Julien's or from\nothers. Can you please respond to the email from each of us separately\nwith an inline response. 
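As an aside on how the buffer counters in the view could be consumed: a monitoring script can derive a completion percentage and a rough ETA from `buffers_processed` and `buffers_total`. The helper below is purely hypothetical client-side code, not part of the patch; the column names follow the pg_stat_progress_checkpoint output quoted earlier in the thread.

```python
# Hypothetical monitoring-side calculation over the view's buffer counters.
def checkpoint_progress(buffers_processed: int, buffers_total: int,
                        elapsed_secs: float):
    """Return (percent done, estimated seconds remaining or None)."""
    if buffers_total == 0:
        return 0.0, None            # nothing to write, or that phase not reached yet
    done = buffers_processed / buffers_total
    eta = elapsed_secs * (1 - done) / done if done > 0 else None
    return done * 100.0, eta

pct, eta = checkpoint_progress(buffers_processed=500, buffers_total=2000,
                               elapsed_secs=10.0)
assert pct == 25.0
assert eta == 30.0
```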
That will be helpful to understand your\nthoughts on our review comments.\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Fri, Mar 4, 2022 at 4:59 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> Thanks for reviewing.\n>\n> > 6) How about shutdown and end-of-recovery checkpoint? Are you planning\n> > to have an ereport_startup_progress mechanism as 0002?\n>\n> I thought of including it earlier then I felt lets first make the\n> current patch stable. Once all the fields are properly decided and the\n> patch gets in then we can easily extend the functionality to shutdown\n> and end-of-recovery cases. I have also observed that the timer\n> functionality wont work properly in case of shutdown as we are doing\n> an immediate checkpoint. So this needs a lot of discussion and I would\n> like to handle this on a separate thread.\n> ---\n>\n> > 7) I think you don't need to call checkpoint_progress_start and\n> > pgstat_progress_update_param, any other progress reporting function\n> > for shutdown and end-of-recovery checkpoint right?\n>\n> I had included the guards earlier and then removed later based on the\n> discussion upthread.\n> ---\n>\n> > [local]:5432 ashu@postgres=# select * from pg_stat_progress_checkpoint;\n> > -[ RECORD 1 ]-----+-------------------------------------\n> > pid | 22043\n> > type | checkpoint\n> > kind | immediate force wait requested time\n> > start_lsn | 0/14C60F8\n> > start_time | 2022-03-03 18:59:56.018662+05:30\n> > phase | performing two phase checkpoint\n> >\n> >\n> > This is the output I see when the checkpointer process has come out of\n> > the two phase checkpoint and is currently writing checkpoint xlog\n> > records and doing other stuff like updating control files etc. Is this\n> > okay?\n>\n> The idea behind choosing the phases is based on the functionality\n> which takes longer time to execute. 
Since after two phase checkpoint\n> till post checkpoint cleanup won't take much time to execute, I have\n> not added any additional phase for that. But I also agree that this\n> gives wrong information to the user. How about mentioning the phase\n> information at the end of each phase like \"Initializing\",\n> \"Initialization done\", ..., \"two phase checkpoint done\", \"post\n> checkpoint cleanup done\", .., \"finalizing\". Except for the first phase\n> (\"initializing\") and last phase (\"finalizing\"), all the other phases\n> describe the end of a certain operation. I feel this gives correct\n> information even though the phase name/description does not represent\n> the entire code block between two phases. For example if the current\n> phase is ''two phase checkpoint done\". Then the user can infer that\n> the checkpointer has done till two phase checkpoint and it is doing\n> other stuff that are after that. Thoughts?\n>\n> > The output of log_checkpoint shows the number of buffers written is 3\n> > whereas the output of pg_stat_progress_checkpoint shows it as 0. See\n> > below:\n> >\n> > 2022-03-03 20:04:45.643 IST [22043] LOG: checkpoint complete: wrote 3\n> > buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled;\n> > write=24.652 s, sync=104.256 s, total=3889.625 s; sync files=2,\n> > longest=0.011 s, average=0.008 s; distance=0 kB, estimate=0 kB\n> >\n> > --\n> >\n> > [local]:5432 ashu@postgres=# select * from pg_stat_progress_checkpoint;\n> > -[ RECORD 1 ]-----+-------------------------------------\n> > pid | 22043\n> > type | checkpoint\n> > kind | immediate force wait requested time\n> > start_lsn | 0/14C60F8\n> > start_time | 2022-03-03 18:59:56.018662+05:30\n> > phase | finalizing\n> > buffers_total | 0\n> > buffers_processed | 0\n> > buffers_written | 0\n> >\n> > Any idea why this mismatch?\n>\n> Good catch. In BufferSync() we have 'num_to_scan' (buffers_total)\n> which indicates the total number of buffers to be processed. 
Based on\n> that, the 'buffers_processed' and 'buffers_written' counter gets\n> incremented. I meant these values may reach upto 'buffers_total'. The\n> current pg_stat_progress_view support above information. There is\n> another place when 'ckpt_bufs_written' gets incremented (In\n> SlruInternalWritePage()). This increment is above the 'buffers_total'\n> value and it is included in the server log message (checkpoint end)\n> and not included in the view. I am a bit confused here. If we include\n> this increment in the view then we cannot calculate the exact\n> 'buffers_total' beforehand. Can we increment the 'buffers_toal' also\n> when 'ckpt_bufs_written' gets incremented so that we can match the\n> behaviour with checkpoint end message? Please share your thoughts.\n> ---\n>\n> > I think we can add a couple of more information to this view -\n> > start_time for buffer write operation and start_time for buffer sync\n> > operation. These are two very time consuming tasks in a checkpoint and\n> > people would find it useful to know how much time is being taken by\n> > the checkpoint in I/O operation phase. thoughts?\n>\n> I felt the detailed progress is getting shown for these 2 phases of\n> the checkpoint like 'buffers_processed', 'buffers_written' and\n> 'files_synced'. Hence I did not think about adding start time and If\n> it is really required, then I can add.\n>\n> > Is it that hard or costly to do? Just sending a message to increment\n> > the stat counter in RequestCheckpoint() would be enough.\n> >\n> > Also, unless I'm missing something it's still only showing the initial\n> > checkpoint flags, so it's *not* showing what the checkpoint is really\n> > doing, only what the checkpoint may be doing if nothing else happens.\n> > It just feels wrong. 
You could even use that ckpt_flags info to know\n> > that at least one backend has requested a new checkpoint, if you don't\n> > want to have a number of backends.\n>\n> I think using ckpt_flags to display whether any new requests have been\n> made or not is a good idea. I will include it in the next patch.\n>\n> Thanks & Regards,\n> Nitin Jadhav\n> On Thu, Mar 3, 2022 at 11:58 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Wed, Mar 2, 2022 at 7:15 PM Nitin Jadhav\n> > <nitinjadhavpostgres@gmail.com> wrote:\n> > >\n> > > > > > As mentioned upthread, there can be multiple backends that request a\n> > > > > > checkpoint, so unless we want to store an array of pid we should store a number\n> > > > > > of backend that are waiting for a new checkpoint.\n> > >\n> > > It's a good metric to show in the view but the information is not\n> > > readily available. Additional code is required to calculate the number\n> > > of requests. Is it worth doing that? I feel this can be added later if\n> > > required.\n> >\n> > Is it that hard or costly to do? Just sending a message to increment\n> > the stat counter in RequestCheckpoint() would be enough.\n> >\n> > Also, unless I'm missing something it's still only showing the initial\n> > checkpoint flags, so it's *not* showing what the checkpoint is really\n> > doing, only what the checkpoint may be doing if nothing else happens.\n> > It just feels wrong. 
You could even use that ckpt_flags info to know\n> > that at least one backend has requested a new checkpoint, if you don't\n> > want to have a number of backends.\n\n\n", "msg_date": "Fri, 4 Mar 2022 17:50:47 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "> 1) Can we convert below into pgstat_progress_update_multi_param, just\n> to avoid function calls?\n> pgstat_progress_update_param(PROGRESS_CHECKPOINT_LSN, checkPoint.redo);\n> pgstat_progress_update_param(PROGRESS_CHECKPOINT_PHASE,\n>\n> 2) Why are we not having special phase for CheckPointReplicationOrigin\n> as it does good bunch of work (writing to disk, XLogFlush,\n> durable_rename) especially when max_replication_slots is large?\n>\n> 3) I don't think \"requested\" is necessary here as it doesn't add any\n> value or it's not a checkpoint kind or such, you can remove it.\n>\n> 4) s:/'recycling old XLOG files'/'recycling old WAL files'\n> + WHEN 16 THEN 'recycling old XLOG files'\n>\n> 5) Can we place CREATE VIEW pg_stat_progress_checkpoint AS definition\n> next to pg_stat_progress_copy in system_view.sql? It looks like all\n> the progress reporting views are next to each other.\n\nI will take care in the next patch.\n---\n\n> 6) How about shutdown and end-of-recovery checkpoint? Are you planning\n> to have an ereport_startup_progress mechanism as 0002?\n\nI thought of including it earlier then I felt lets first make the\ncurrent patch stable. Once all the fields are properly decided and the\npatch gets in then we can easily extend the functionality to shutdown\nand end-of-recovery cases. I have also observed that the timer\nfunctionality wont work properly in case of shutdown as we are doing\nan immediate checkpoint. 
So this needs a lot of discussion and I would\nlike to handle this on a separate thread.\n---\n\n> 7) I think you don't need to call checkpoint_progress_start and\n> pgstat_progress_update_param, any other progress reporting function\n> for shutdown and end-of-recovery checkpoint right?\n\nI had included the guards earlier and then removed later based on the\ndiscussion upthread.\n---\n\n> 8) Not for all kinds of checkpoints right? pg_stat_progress_checkpoint\n> can't show progress report for shutdown and end-of-recovery\n> checkpoint, I think you need to specify that here in wal.sgml and\n> checkpoint.sgml.\n> + command <command>CHECKPOINT</command>. The checkpointer process running the\n> + checkpoint will report its progress in the\n> + <structname>pg_stat_progress_checkpoint</structname> view. See\n> + <xref linkend=\"checkpoint-progress-reporting\"/> for details.\n>\n> 9) Can you add a test case for pg_stat_progress_checkpoint view? I\n> think it's good to add one. See, below for reference:\n> -- Add a trigger to catch and print the contents of the catalog view\n> -- pg_stat_progress_copy during data insertion. This allows to test\n> -- the validation of some progress reports for COPY FROM where the trigger\n> -- would fire.\n> create function notice_after_tab_progress_reporting() returns trigger AS\n> $$\n> declare report record;\n>\n> 10) Typo: it's not \"is happens\"\n> + The checkpoint is happens without delays.\n>\n> 11) Can you be specific what are those \"some operations\" that forced a\n> checkpoint? May be like, basebackup, createdb or something?\n> + The checkpoint is started because some operation forced a checkpoint.\n>\n> 12) Can you be a bit elobartive here who waits? 
Something like the\n> backend that requested checkpoint will wait until it's completion ....\n> + Wait for completion before returning.\n>\n> 13) \"removing unneeded or flushing needed logical rewrite mapping files\"\n> + The checkpointer process is currently removing/flushing the logical\n>\n> 14) \"old WAL files\"\n> + The checkpointer process is currently recycling old XLOG files.\n\nI will take care in the next patch.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Wed, Mar 2, 2022 at 11:52 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Mar 2, 2022 at 4:45 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> > > Also, how about special phases for SyncPostCheckpoint(),\n> > > SyncPreCheckpoint(), InvalidateObsoleteReplicationSlots(),\n> > > PreallocXlogFiles() (it currently pre-allocates only 1 WAL file, but\n> > > it might be increase in future (?)), TruncateSUBTRANS()?\n> >\n> > SyncPreCheckpoint() is just incrementing a counter and\n> > PreallocXlogFiles() currently pre-allocates only 1 WAL file. I feel\n> > there is no need to add any phases for these as of now. We can add in\n> > the future if necessary. Added phases for SyncPostCheckpoint(),\n> > InvalidateObsoleteReplicationSlots() and TruncateSUBTRANS().\n>\n> Okay.\n>\n> > > 10) I'm not sure if it's discussed, how about adding the number of\n> > > snapshot/mapping files so far the checkpoint has processed in file\n> > > processing while loops of\n> > > CheckPointSnapBuild/CheckPointLogicalRewriteHeap? Sometimes, there can\n> > > be many logical snapshot or mapping files and users may be interested\n> > > in knowing the so-far-processed-file-count.\n> >\n> > I had thought about this while sharing the v1 patch and mentioned my\n> > views upthread. I feel it won't give meaningful progress information\n> > (It can be treated as statistics). Hence not included. Thoughts?\n>\n> Okay. 
If there are any complaints about it we can always add them later.\n>\n> > > > > As mentioned upthread, there can be multiple backends that request a\n> > > > > checkpoint, so unless we want to store an array of pid we should store a number\n> > > > > of backend that are waiting for a new checkpoint.\n> > > >\n> > > > Yeah, you are right. Let's not go that path and store an array of\n> > > > pids. I don't see a strong use-case with the pid of the process\n> > > > requesting checkpoint. If required, we can add it later once the\n> > > > pg_stat_progress_checkpoint view gets in.\n> > >\n> > > I don't think that's really necessary to give the pid list.\n> > >\n> > > If you requested a new checkpoint, it doesn't matter if it's only your backend\n> > > that triggered it, another backend or a few other dozen, the result will be the\n> > > same and you have the information that the request has been seen. We could\n> > > store just a bool for that but having a number instead also gives a bit more\n> > > information and may allow you to detect some broken logic on your client code\n> > > if it keeps increasing.\n> >\n> > It's a good metric to show in the view but the information is not\n> > readily available. Additional code is required to calculate the number\n> > of requests. Is it worth doing that? 
I feel this can be added later if\n> > required.\n>\n> Yes, we can always add it later if required.\n>\n> > Please find the v4 patch attached and share your thoughts.\n>\n> I reviewed v4 patch, here are my comments:\n>\n> 1) Can we convert below into pgstat_progress_update_multi_param, just\n> to avoid function calls?\n> pgstat_progress_update_param(PROGRESS_CHECKPOINT_LSN, checkPoint.redo);\n> pgstat_progress_update_param(PROGRESS_CHECKPOINT_PHASE,\n>\n> 2) Why are we not having special phase for CheckPointReplicationOrigin\n> as it does good bunch of work (writing to disk, XLogFlush,\n> durable_rename) especially when max_replication_slots is large?\n>\n> 3) I don't think \"requested\" is necessary here as it doesn't add any\n> value or it's not a checkpoint kind or such, you can remove it.\n>\n> 4) s:/'recycling old XLOG files'/'recycling old WAL files'\n> + WHEN 16 THEN 'recycling old XLOG files'\n>\n> 5) Can we place CREATE VIEW pg_stat_progress_checkpoint AS definition\n> next to pg_stat_progress_copy in system_view.sql? It looks like all\n> the progress reporting views are next to each other.\n>\n> 6) How about shutdown and end-of-recovery checkpoint? Are you planning\n> to have an ereport_startup_progress mechanism as 0002?\n>\n> 7) I think you don't need to call checkpoint_progress_start and\n> pgstat_progress_update_param, any other progress reporting function\n> for shutdown and end-of-recovery checkpoint right?\n>\n> 8) Not for all kinds of checkpoints right? pg_stat_progress_checkpoint\n> can't show progress report for shutdown and end-of-recovery\n> checkpoint, I think you need to specify that here in wal.sgml and\n> checkpoint.sgml.\n> + command <command>CHECKPOINT</command>. The checkpointer process running the\n> + checkpoint will report its progress in the\n> + <structname>pg_stat_progress_checkpoint</structname> view. 
See\n> + <xref linkend=\"checkpoint-progress-reporting\"/> for details.\n>\n> 9) Can you add a test case for pg_stat_progress_checkpoint view? I\n> think it's good to add one. See, below for reference:\n> -- Add a trigger to catch and print the contents of the catalog view\n> -- pg_stat_progress_copy during data insertion. This allows to test\n> -- the validation of some progress reports for COPY FROM where the trigger\n> -- would fire.\n> create function notice_after_tab_progress_reporting() returns trigger AS\n> $$\n> declare report record;\n>\n> 10) Typo: it's not \"is happens\"\n> + The checkpoint is happens without delays.\n>\n> 11) Can you be specific what are those \"some operations\" that forced a\n> checkpoint? May be like, basebackup, createdb or something?\n> + The checkpoint is started because some operation forced a checkpoint.\n>\n> 12) Can you be a bit elobartive here who waits? Something like the\n> backend that requested checkpoint will wait until it's completion ....\n> + Wait for completion before returning.\n>\n> 13) \"removing unneeded or flushing needed logical rewrite mapping files\"\n> + The checkpointer process is currently removing/flushing the logical\n>\n> 14) \"old WAL files\"\n> + The checkpointer process is currently recycling old XLOG files.\n>\n> Regards,\n> Bharath Rupireddy.\n\n\n", "msg_date": "Mon, 7 Mar 2022 19:45:50 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>type</structfield> <type>text</type>\n> + </para>\n> + <para>\n> + Type of checkpoint. 
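The phase column discussed in the review items above is implemented as a CASE over an integer phase code (the review quotes "WHEN 16 THEN 'recycling old XLOG files'"). A client-side mapping might look like the sketch below; only ordinal 16 is taken from the quoted patch fragment, the other ordinals are assumed for illustration.

```python
# Illustrative phase-code -> description mapping, mirroring the view's CASE
# expression. Only code 16 comes from the patch text quoted in the review;
# the rest are placeholders.
PHASE_NAMES = {
    1: "initializing",
    16: "recycling old WAL files",   # renamed from "old XLOG files" per review
    17: "finalizing",
}

def phase_name(code: int) -> str:
    return PHASE_NAMES.get(code, "unknown phase %d" % code)

assert phase_name(16) == "recycling old WAL files"
assert phase_name(99) == "unknown phase 99"
```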
See <xref linkend=\"checkpoint-types\"/>.\n> + </para></entry>\n> + </row>\n> +\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>kind</structfield> <type>text</type>\n> + </para>\n> + <para>\n> + Kind of checkpoint. See <xref linkend=\"checkpoint-kinds\"/>.\n> + </para></entry>\n> + </row>\n>\n> This looks a bit confusing. Two columns, one with the name \"checkpoint\n> types\" and another \"checkpoint kinds\". You can probably rename\n> checkpoint-kinds to checkpoint-flags and let the checkpoint-types be\n> as-it-is.\n\nMakes sense. I will change in the next patch.\n---\n\n> + <entry><structname>pg_stat_progress_checkpoint</structname><indexterm><primary>pg_stat_progress_checkpoint</primary></indexterm></entry>\n> + <entry>One row only, showing the progress of the checkpoint.\n>\n> Let's make this message consistent with the already existing message\n> for pg_stat_wal_receiver. See description for pg_stat_wal_receiver\n> view in \"Dynamic Statistics Views\" table.\n\nYou want me to change \"One row only\" to \"Only one row\" ? If that is\nthe case then for other views in the \"Collected Statistics Views\"\ntable, it is referred as \"One row only\". Let me know if you are\npointing out something else.\n---\n\n> [local]:5432 ashu@postgres=# select * from pg_stat_progress_checkpoint;\n> -[ RECORD 1 ]-----+-------------------------------------\n> pid | 22043\n> type | checkpoint\n> kind | immediate force wait requested time\n>\n> I think the output in the kind column can be displayed as {immediate,\n> force, wait, requested, time}. 
By the way these are all checkpoint\n> flags so it is better to display it as checkpoint flags instead of\n> checkpoint kind as mentioned in one of my previous comments.\n\nI will update in the next patch.\n---\n\n> [local]:5432 ashu@postgres=# select * from pg_stat_progress_checkpoint;\n> -[ RECORD 1 ]-----+-------------------------------------\n> pid | 22043\n> type | checkpoint\n> kind | immediate force wait requested time\n> start_lsn | 0/14C60F8\n> start_time | 2022-03-03 18:59:56.018662+05:30\n> phase | performing two phase checkpoint\n>\n> This is the output I see when the checkpointer process has come out of\n> the two phase checkpoint and is currently writing checkpoint xlog\n> records and doing other stuff like updating control files etc. Is this\n> okay?\n\nThe idea behind choosing the phases is based on the functionality\nwhich takes longer time to execute. Since after two phase checkpoint\ntill post checkpoint cleanup won't take much time to execute, I have\nnot added any additional phase for that. But I also agree that this\ngives wrong information to the user. How about mentioning the phase\ninformation at the end of each phase like \"Initializing\",\n\"Initialization done\", ..., \"two phase checkpoint done\", \"post\ncheckpoint cleanup done\", .., \"finalizing\". Except for the first phase\n(\"initializing\") and last phase (\"finalizing\"), all the other phases\ndescribe the end of a certain operation. I feel this gives correct\ninformation even though the phase name/description does not represent\nthe entire code block between two phases. For example if the current\nphase is ''two phase checkpoint done\". Then the user can infer that\nthe checkpointer has done till two phase checkpoint and it is doing\nother stuff that are after that. Thoughts?\n---\n\n> The output of log_checkpoint shows the number of buffers written is 3\n> whereas the output of pg_stat_progress_checkpoint shows it as 0. 
See\n> below:\n>\n> 2022-03-03 20:04:45.643 IST [22043] LOG: checkpoint complete: wrote 3\n> buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled;\n> write=24.652 s, sync=104.256 s, total=3889.625 s; sync files=2,\n> longest=0.011 s, average=0.008 s; distance=0 kB, estimate=0 kB\n>\n> --\n>\n> [local]:5432 ashu@postgres=# select * from pg_stat_progress_checkpoint;\n> -[ RECORD 1 ]-----+-------------------------------------\n> pid | 22043\n> type | checkpoint\n> kind | immediate force wait requested time\n> start_lsn | 0/14C60F8\n> start_time | 2022-03-03 18:59:56.018662+05:30\n> phase | finalizing\n> buffers_total | 0\n> buffers_processed | 0\n> buffers_written | 0\n>\n> Any idea why this mismatch?\n\nGood catch. In BufferSync() we have 'num_to_scan' (buffers_total)\nwhich indicates the total number of buffers to be processed. Based on\nthat, the 'buffers_processed' and 'buffers_written' counter gets\nincremented. I meant these values may reach upto 'buffers_total'. The\ncurrent pg_stat_progress_view support above information. There is\nanother place when 'ckpt_bufs_written' gets incremented (In\nSlruInternalWritePage()). This increment is above the 'buffers_total'\nvalue and it is included in the server log message (checkpoint end)\nand not included in the view. I am a bit confused here. If we include\nthis increment in the view then we cannot calculate the exact\n'buffers_total' beforehand. Can we increment the 'buffers_toal' also\nwhen 'ckpt_bufs_written' gets incremented so that we can match the\nbehaviour with checkpoint end message? Please share your thoughts.\n---\n\n> I think we can add a couple of more information to this view -\n> start_time for buffer write operation and start_time for buffer sync\n> operation. These are two very time consuming tasks in a checkpoint and\n> people would find it useful to know how much time is being taken by\n> the checkpoint in I/O operation phase. 
thoughts?\n\nThe detailed progress is already shown for these 2 phases of the\ncheckpoint, like 'buffers_processed', 'buffers_written' and\n'files_synced'. Hence I did not think about adding the start time; if\nit is really required, then I can add it.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Thu, Mar 3, 2022 at 8:30 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Here are some of my review comments on the latest patch:\n>\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>type</structfield> <type>text</type>\n> + </para>\n> + <para>\n> + Type of checkpoint. See <xref linkend=\"checkpoint-types\"/>.\n> + </para></entry>\n> + </row>\n> +\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>kind</structfield> <type>text</type>\n> + </para>\n> + <para>\n> + Kind of checkpoint. See <xref linkend=\"checkpoint-kinds\"/>.\n> + </para></entry>\n> + </row>\n>\n> This looks a bit confusing. Two columns, one with the name \"checkpoint\n> types\" and another \"checkpoint kinds\". You can probably rename\n> checkpoint-kinds to checkpoint-flags and let the checkpoint-types be\n> as-it-is.\n>\n> ==\n>\n> + <entry><structname>pg_stat_progress_checkpoint</structname><indexterm><primary>pg_stat_progress_checkpoint</primary></indexterm></entry>\n> + <entry>One row only, showing the progress of the checkpoint.\n>\n> Let's make this message consistent with the already existing message\n> for pg_stat_wal_receiver. See description for pg_stat_wal_receiver\n> view in \"Dynamic Statistics Views\" table.\n>\n> ==\n>\n> [local]:5432 ashu@postgres=# select * from pg_stat_progress_checkpoint;\n> -[ RECORD 1 ]-----+-------------------------------------\n> pid | 22043\n> type | checkpoint\n> kind | immediate force wait requested time\n>\n> I think the output in the kind column can be displayed as {immediate,\n> force, wait, requested, time}. 
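A brace-style list like that is easy to build from the flag bits. A rough Python sketch of the formatting (the bit values below are placeholders I made up for illustration, not the real CHECKPOINT_* defines from src/include/access/xlog.h):

```python
# Illustrative only: these bit positions mimic the shape of the CHECKPOINT_*
# flag defines but are NOT the real values -- see src/include/access/xlog.h.
FLAG_NAMES = [
    (1 << 0, "shutdown"), (1 << 1, "end-of-recovery"), (1 << 2, "immediate"),
    (1 << 3, "force"), (1 << 4, "flush-all"), (1 << 5, "wait"),
    (1 << 6, "requested"), (1 << 7, "wal"), (1 << 8, "time"),
]

def describe_flags(flags: int) -> str:
    # collect the names of all set bits, in define order
    names = [name for bit, name in FLAG_NAMES if flags & bit]
    return "{" + ", ".join(names) + "}" if names else "unknown"

print(describe_flags((1 << 2) | (1 << 3) | (1 << 5) | (1 << 6) | (1 << 8)))
# {immediate, force, wait, requested, time}
```

The real implementation would of course stay in C in the backend; this only illustrates the suggested formatting.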
By the way these are all checkpoint\n> flags so it is better to display it as checkpoint flags instead of\n> checkpoint kind as mentioned in one of my previous comments.\n>\n> ==\n>\n> [local]:5432 ashu@postgres=# select * from pg_stat_progress_checkpoint;\n> -[ RECORD 1 ]-----+-------------------------------------\n> pid | 22043\n> type | checkpoint\n> kind | immediate force wait requested time\n> start_lsn | 0/14C60F8\n> start_time | 2022-03-03 18:59:56.018662+05:30\n> phase | performing two phase checkpoint\n>\n>\n> This is the output I see when the checkpointer process has come out of\n> the two phase checkpoint and is currently writing checkpoint xlog\n> records and doing other stuff like updating control files etc. Is this\n> okay?\n>\n> ==\n>\n> The output of log_checkpoint shows the number of buffers written is 3\n> whereas the output of pg_stat_progress_checkpoint shows it as 0. See\n> below:\n>\n> 2022-03-03 20:04:45.643 IST [22043] LOG: checkpoint complete: wrote 3\n> buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled;\n> write=24.652 s, sync=104.256 s, total=3889.625 s; sync files=2,\n> longest=0.011 s, average=0.008 s; distance=0 kB, estimate=0 kB\n>\n> --\n>\n> [local]:5432 ashu@postgres=# select * from pg_stat_progress_checkpoint;\n> -[ RECORD 1 ]-----+-------------------------------------\n> pid | 22043\n> type | checkpoint\n> kind | immediate force wait requested time\n> start_lsn | 0/14C60F8\n> start_time | 2022-03-03 18:59:56.018662+05:30\n> phase | finalizing\n> buffers_total | 0\n> buffers_processed | 0\n> buffers_written | 0\n>\n> Any idea why this mismatch?\n>\n> ==\n>\n> I think we can add a couple of more information to this view -\n> start_time for buffer write operation and start_time for buffer sync\n> operation. These are two very time consuming tasks in a checkpoint and\n> people would find it useful to know how much time is being taken by\n> the checkpoint in I/O operation phase. 
thoughts?\n>\n> --\n> With Regards,\n> Ashutosh Sharma.\n>\n> On Wed, Mar 2, 2022 at 4:45 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> >\n> > Thanks for reviewing.\n> >\n> > > > > I suggested upthread to store the starting timeline instead. This way you can\n> > > > > deduce whether it's a restartpoint or a checkpoint, but you can also deduce\n> > > > > other information, like what was the starting WAL.\n> > > >\n> > > > I don't understand why we need the timeline here to just determine\n> > > > whether it's a restartpoint or checkpoint.\n> > >\n> > > I'm not saying it's necessary, I'm saying that for the same space usage we can\n> > > store something a bit more useful. If no one cares about having the starting\n> > > timeline available for no extra cost then sure, let's just store the kind\n> > > directly.\n> >\n> > Fixed.\n> >\n> > > 2) Can't we just have these checks inside CASE-WHEN-THEN-ELSE blocks\n> > > directly instead of new function pg_stat_get_progress_checkpoint_kind?\n> > > + snprintf(ckpt_kind, MAXPGPATH, \"%s%s%s%s%s%s%s%s%s\",\n> > > + (flags == 0) ? \"unknown\" : \"\",\n> > > + (flags & CHECKPOINT_IS_SHUTDOWN) ? \"shutdown \" : \"\",\n> > > + (flags & CHECKPOINT_END_OF_RECOVERY) ? \"end-of-recovery \" : \"\",\n> > > + (flags & CHECKPOINT_IMMEDIATE) ? \"immediate \" : \"\",\n> > > + (flags & CHECKPOINT_FORCE) ? \"force \" : \"\",\n> > > + (flags & CHECKPOINT_WAIT) ? \"wait \" : \"\",\n> > > + (flags & CHECKPOINT_CAUSE_XLOG) ? \"wal \" : \"\",\n> > > + (flags & CHECKPOINT_CAUSE_TIME) ? \"time \" : \"\",\n> > > + (flags & CHECKPOINT_FLUSH_ALL) ? \"flush-all\" : \"\");\n> >\n> > Fixed.\n> > ---\n> >\n> > > 5) Do we need a special phase for this checkpoint operation? 
I'm not\n> > > sure in which cases it will take a long time, but it looks like\n> > > there's a wait loop here.\n> > > vxids = GetVirtualXIDsDelayingChkpt(&nvxids);\n> > > if (nvxids > 0)\n> > > {\n> > > do\n> > > {\n> > > pg_usleep(10000L); /* wait for 10 msec */\n> > > } while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids));\n> > > }\n> >\n> > Yes. It is better to add a separate phase here.\n> > ---\n> >\n> > > Also, how about special phases for SyncPostCheckpoint(),\n> > > SyncPreCheckpoint(), InvalidateObsoleteReplicationSlots(),\n> > > PreallocXlogFiles() (it currently pre-allocates only 1 WAL file, but\n> > > it might be increase in future (?)), TruncateSUBTRANS()?\n> >\n> > SyncPreCheckpoint() is just incrementing a counter and\n> > PreallocXlogFiles() currently pre-allocates only 1 WAL file. I feel\n> > there is no need to add any phases for these as of now. We can add in\n> > the future if necessary. Added phases for SyncPostCheckpoint(),\n> > InvalidateObsoleteReplicationSlots() and TruncateSUBTRANS().\n> > ---\n> >\n> > > 6) SLRU (Simple LRU) isn't a phase here, you can just say\n> > > PROGRESS_CHECKPOINT_PHASE_PREDICATE_LOCK_PAGES.\n> > > +\n> > > + pgstat_progress_update_param(PROGRESS_CHECKPOINT_PHASE,\n> > > + PROGRESS_CHECKPOINT_PHASE_SLRU_PAGES);\n> > > CheckPointPredicate();\n> > >\n> > > And :s/checkpointing SLRU pages/checkpointing predicate lock pages\n> > >+ WHEN 9 THEN 'checkpointing SLRU pages'\n> >\n> > Fixed.\n> > ---\n> >\n> > > 7) :s/PROGRESS_CHECKPOINT_PHASE_FILE_SYNC/PROGRESS_CHECKPOINT_PHASE_PROCESS_FILE_SYNC_REQUESTS\n> >\n> > I feel PROGRESS_CHECKPOINT_PHASE_FILE_SYNC is a better option here as\n> > it describes the purpose in less words.\n> >\n> > > And :s/WHEN 11 THEN 'performing sync requests'/WHEN 11 THEN\n> > > 'processing file sync requests'\n> >\n> > Fixed.\n> > ---\n> >\n> > > 8) :s/Finalizing/finalizing\n> > > + WHEN 14 THEN 'Finalizing'\n> >\n> > Fixed.\n> > ---\n> >\n> > > 9) :s/checkpointing snapshots/checkpointing logical 
replication snapshot files\n> > > + WHEN 3 THEN 'checkpointing snapshots'\n> > > :s/checkpointing logical rewrite mappings/checkpointing logical\n> > > replication rewrite mapping files\n> > > + WHEN 4 THEN 'checkpointing logical rewrite mappings'\n> >\n> > Fixed.\n> > ---\n> >\n> > > 10) I'm not sure if it's discussed, how about adding the number of\n> > > snapshot/mapping files so far the checkpoint has processed in file\n> > > processing while loops of\n> > > CheckPointSnapBuild/CheckPointLogicalRewriteHeap? Sometimes, there can\n> > > be many logical snapshot or mapping files and users may be interested\n> > > in knowing the so-far-processed-file-count.\n> >\n> > I had thought about this while sharing the v1 patch and mentioned my\n> > views upthread. I feel it won't give meaningful progress information\n> > (It can be treated as statistics). Hence not included. Thoughts?\n> >\n> > > > > As mentioned upthread, there can be multiple backends that request a\n> > > > > checkpoint, so unless we want to store an array of pid we should store a number\n> > > > > of backend that are waiting for a new checkpoint.\n> > > >\n> > > > Yeah, you are right. Let's not go that path and store an array of\n> > > > pids. I don't see a strong use-case with the pid of the process\n> > > > requesting checkpoint. If required, we can add it later once the\n> > > > pg_stat_progress_checkpoint view gets in.\n> > >\n> > > I don't think that's really necessary to give the pid list.\n> > >\n> > > If you requested a new checkpoint, it doesn't matter if it's only your backend\n> > > that triggered it, another backend or a few other dozen, the result will be the\n> > > same and you have the information that the request has been seen. 
We could\n> > > store just a bool for that but having a number instead also gives a bit more\n> > > information and may allow you to detect some broken logic on your client code\n> > > if it keeps increasing.\n> >\n> > It's a good metric to show in the view but the information is not\n> > readily available. Additional code is required to calculate the number\n> > of requests. Is it worth doing that? I feel this can be added later if\n> > required.\n> >\n> > Please find the v4 patch attached and share your thoughts.\n> >\n> > Thanks & Regards,\n> > Nitin Jadhav\n> >\n> > On Tue, Mar 1, 2022 at 2:27 PM Nitin Jadhav\n> > <nitinjadhavpostgres@gmail.com> wrote:\n> > >\n> > > > > 3) Why do we need this extra calculation for start_lsn?\n> > > > > Do you ever see a negative LSN or something here?\n> > > > > + ('0/0'::pg_lsn + (\n> > > > > + CASE\n> > > > > + WHEN (s.param3 < 0) THEN pow((2)::numeric, (64)::numeric)\n> > > > > + ELSE (0)::numeric\n> > > > > + END + (s.param3)::numeric)) AS start_lsn,\n> > > >\n> > > > Yes: LSN can take up all of an uint64; whereas the pgstat column is a\n> > > > bigint type; thus the signed int64. This cast is OK as it wraps\n> > > > around, but that means we have to take care to correctly display the\n> > > > LSN when it is > 0x7FFF_FFFF_FFFF_FFFF; which is what we do here using\n> > > > the special-casing for negative values.\n> > >\n> > > Yes. The extra calculation is required here as we are storing unit64\n> > > value in the variable of type int64. When we convert uint64 to int64\n> > > then the bit pattern is preserved (so no data is lost). The high-order\n> > > bit becomes the sign bit and if the sign bit is set, both the sign and\n> > > magnitude of the value changes. To safely get the actual uint64 value\n> > > whatever was assigned, we need the above calculations.\n> > >\n> > > > > 4) Can't you use timestamptz_in(to_char(s.param4)) instead of\n> > > > > pg_stat_get_progress_checkpoint_start_time? 
I don't quite understand\n> > > > > the reasoning for having this function and it's named as *checkpoint*\n> > > > > when it doesn't do anything specific to the checkpoint at all?\n> > > >\n> > > > I hadn't thought of using the types' inout functions, but it looks\n> > > > like timestamp IO functions use a formatted timestring, which won't\n> > > > work with the epoch-based timestamp stored in the view.\n> > >\n> > > There is a variation of to_timestamp() which takes UNIX epoch (float8)\n> > > as an argument and converts it to timestamptz but we cannot directly\n> > > call this function with S.param4.\n> > >\n> > > TimestampTz\n> > > GetCurrentTimestamp(void)\n> > > {\n> > > TimestampTz result;\n> > > struct timeval tp;\n> > >\n> > > gettimeofday(&tp, NULL);\n> > >\n> > > result = (TimestampTz) tp.tv_sec -\n> > > ((POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) * SECS_PER_DAY);\n> > > result = (result * USECS_PER_SEC) + tp.tv_usec;\n> > >\n> > > return result;\n> > > }\n> > >\n> > > S.param4 contains the output of the above function\n> > > (GetCurrentTimestamp()) which returns Postgres epoch but the\n> > > to_timestamp() expects UNIX epoch as input. So some calculation is\n> > > required here. I feel the SQL 'to_timestamp(946684800 +\n> > > (S.param4::float / 1000000)) AS start_time' works fine. The value\n> > > '946684800' is equal to ((POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) *\n> > > SECS_PER_DAY). I am not sure whether it is good practice to use this\n> > > way. 
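That epoch arithmetic is easy to sanity-check outside the server; a small Python sketch (the helper name is mine, purely illustrative):

```python
from datetime import datetime, timezone

# PostgreSQL's TimestampTz counts microseconds from 2000-01-01 00:00:00 UTC
# (the Postgres epoch), while to_timestamp() expects seconds from 1970-01-01
# (the Unix epoch). The magic constant is just the offset between the two.
PG_UNIX_EPOCH_DIFF_SECS = int(
    (datetime(2000, 1, 1, tzinfo=timezone.utc)
     - datetime(1970, 1, 1, tzinfo=timezone.utc)).total_seconds())

print(PG_UNIX_EPOCH_DIFF_SECS)  # 946684800

def start_time_from_param(param4_usecs: int) -> datetime:
    # Mirrors: to_timestamp(946684800 + (S.param4::float / 1000000))
    return datetime.fromtimestamp(
        PG_UNIX_EPOCH_DIFF_SECS + param4_usecs / 1_000_000, tz=timezone.utc)

# param4 = 0 corresponds to the Postgres epoch itself.
print(start_time_from_param(0))  # 2000-01-01 00:00:00+00:00
```

So the SQL expression simply undoes the ((POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) * SECS_PER_DAY) shift that GetCurrentTimestamp() applies.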
Kindly share your thoughts.\n> > >\n> > > Thanks & Regards,\n> > > Nitin Jadhav\n> > >\n> > > On Mon, Feb 28, 2022 at 6:40 PM Matthias van de Meent\n> > > <boekewurm+postgres@gmail.com> wrote:\n> > > >\n> > > > On Sun, 27 Feb 2022 at 16:14, Bharath Rupireddy\n> > > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > > 3) Why do we need this extra calculation for start_lsn?\n> > > > > Do you ever see a negative LSN or something here?\n> > > > > + ('0/0'::pg_lsn + (\n> > > > > + CASE\n> > > > > + WHEN (s.param3 < 0) THEN pow((2)::numeric, (64)::numeric)\n> > > > > + ELSE (0)::numeric\n> > > > > + END + (s.param3)::numeric)) AS start_lsn,\n> > > >\n> > > > Yes: LSN can take up all of an uint64; whereas the pgstat column is a\n> > > > bigint type; thus the signed int64. This cast is OK as it wraps\n> > > > around, but that means we have to take care to correctly display the\n> > > > LSN when it is > 0x7FFF_FFFF_FFFF_FFFF; which is what we do here using\n> > > > the special-casing for negative values.\n> > > >\n> > > > As to whether it is reasonable: Generating 16GB of wal every second\n> > > > (2^34 bytes /sec) is probably not impossible (cpu <> memory bandwidth\n> > > > has been > 20GB/sec for a while); and that leaves you 2^29 seconds of\n> > > > database runtime; or about 17 years. Seeing that a cluster can be\n> > > > `pg_upgrade`d (which doesn't reset cluster LSN) since PG 9.0 from at\n> > > > least version PG 8.4.0 (2009) (and through pg_migrator, from 8.3.0)),\n> > > > we can assume that clusters hitting LSN=2^63 will be a reasonable\n> > > > possibility within the next few years. As the lifespan of a PG release\n> > > > is about 5 years, it doesn't seem impossible that there will be actual\n> > > > clusters that are going to hit this naturally in the lifespan of PG15.\n> > > >\n> > > > It is also possible that someone fat-fingers pg_resetwal; and creates\n> > > > a cluster with LSN >= 2^63; resulting in negative values in the\n> > > > s.param3 field. 
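The signed/unsigned round trip being described can be modelled in a few lines (a sketch of the wraparound, not patch code; function names are mine):

```python
# Model of how a uint64 LSN survives the trip through a signed bigint
# progress slot and back out through the view's CASE expression.
U64 = 1 << 64

def as_stored_bigint(lsn: int) -> int:
    # the uint64 -> int64 cast preserves the bit pattern, wrapping mod 2**64
    return lsn - U64 if lsn >= (1 << 63) else lsn

def as_displayed_lsn(param3: int) -> int:
    # mirrors: CASE WHEN s.param3 < 0 THEN pow(2, 64) ELSE 0 END + s.param3
    return param3 + U64 if param3 < 0 else param3

lsn = 0x8000000000000123            # an LSN past 2**63 (e.g. after pg_resetwal)
stored = as_stored_bigint(lsn)
print(stored < 0)                    # True: it shows up negative in the bigint
print(as_displayed_lsn(stored) == lsn)  # True: the CASE recovers the original
```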
Not likely, but we can force such situations; and as\n> > > > such we should handle that gracefully.\n> > > >\n> > > > > 4) Can't you use timestamptz_in(to_char(s.param4)) instead of\n> > > > > pg_stat_get_progress_checkpoint_start_time? I don't quite understand\n> > > > > the reasoning for having this function and it's named as *checkpoint*\n> > > > > when it doesn't do anything specific to the checkpoint at all?\n> > > >\n> > > > I hadn't thought of using the types' inout functions, but it looks\n> > > > like timestamp IO functions use a formatted timestring, which won't\n> > > > work with the epoch-based timestamp stored in the view.\n> > > >\n> > > > > Having 3 unnecessary functions that aren't useful to the users at all\n> > > > > in proc.dat will simply eatup the function oids IMO. Hence, I suggest\n> > > > > let's try to do without extra functions.\n> > > >\n> > > > I agree that (1) could be simplified, or at least fully expressed in\n> > > > SQL without exposing too many internals. If we're fine with exposing\n> > > > internals like flags and type layouts, then (2), and arguably (4), can\n> > > > be expressed in SQL as well.\n> > > >\n> > > > -Matthias\n\n\n", "msg_date": "Mon, 7 Mar 2022 20:15:28 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "> > 11) Can you be specific what are those \"some operations\" that forced a\n> > checkpoint? May be like, basebackup, createdb or something?\n> > + The checkpoint is started because some operation forced a checkpoint.\n> >\n> I will take care in the next patch.\n\nI feel mentioning/listing the specific operation makes it difficult to\nmaintain the document. If we add any new functionality in future which\nneeds a force checkpoint, then there is a high chance that we will\nmiss to update here. 
Hence I modified it to \"The checkpoint is started\nbecause some operation (for which a checkpoint is necessary) forced\nthe checkpoint\".\n\nFixed other comments as per the discussion above.\nPlease find the v5 patch attached and share your thoughts.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Mon, Mar 7, 2022 at 7:45 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > 1) Can we convert below into pgstat_progress_update_multi_param, just\n> > to avoid function calls?\n> > pgstat_progress_update_param(PROGRESS_CHECKPOINT_LSN, checkPoint.redo);\n> > pgstat_progress_update_param(PROGRESS_CHECKPOINT_PHASE,\n> >\n> > 2) Why are we not having special phase for CheckPointReplicationOrigin\n> > as it does good bunch of work (writing to disk, XLogFlush,\n> > durable_rename) especially when max_replication_slots is large?\n> >\n> > 3) I don't think \"requested\" is necessary here as it doesn't add any\n> > value or it's not a checkpoint kind or such, you can remove it.\n> >\n> > 4) s:/'recycling old XLOG files'/'recycling old WAL files'\n> > + WHEN 16 THEN 'recycling old XLOG files'\n> >\n> > 5) Can we place CREATE VIEW pg_stat_progress_checkpoint AS definition\n> > next to pg_stat_progress_copy in system_view.sql? It looks like all\n> > the progress reporting views are next to each other.\n>\n> I will take care in the next patch.\n> ---\n>\n> > 6) How about shutdown and end-of-recovery checkpoint? Are you planning\n> > to have an ereport_startup_progress mechanism as 0002?\n>\n> I thought of including it earlier, then I felt let's first make the\n> current patch stable. Once all the fields are properly decided and the\n> patch gets in then we can easily extend the functionality to shutdown\n> and end-of-recovery cases. I have also observed that the timer\n> functionality won't work properly in case of shutdown as we are doing\n> an immediate checkpoint. 
So this needs a lot of discussion and I would\n> like to handle this on a separate thread.\n> ---\n>\n> > 7) I think you don't need to call checkpoint_progress_start and\n> > pgstat_progress_update_param, any other progress reporting function\n> > for shutdown and end-of-recovery checkpoint right?\n>\n> I had included the guards earlier and then removed later based on the\n> discussion upthread.\n> ---\n>\n> > 8) Not for all kinds of checkpoints right? pg_stat_progress_checkpoint\n> > can't show progress report for shutdown and end-of-recovery\n> > checkpoint, I think you need to specify that here in wal.sgml and\n> > checkpoint.sgml.\n> > + command <command>CHECKPOINT</command>. The checkpointer process running the\n> > + checkpoint will report its progress in the\n> > + <structname>pg_stat_progress_checkpoint</structname> view. See\n> > + <xref linkend=\"checkpoint-progress-reporting\"/> for details.\n> >\n> > 9) Can you add a test case for pg_stat_progress_checkpoint view? I\n> > think it's good to add one. See, below for reference:\n> > -- Add a trigger to catch and print the contents of the catalog view\n> > -- pg_stat_progress_copy during data insertion. This allows to test\n> > -- the validation of some progress reports for COPY FROM where the trigger\n> > -- would fire.\n> > create function notice_after_tab_progress_reporting() returns trigger AS\n> > $$\n> > declare report record;\n> >\n> > 10) Typo: it's not \"is happens\"\n> > + The checkpoint is happens without delays.\n> >\n> > 11) Can you be specific what are those \"some operations\" that forced a\n> > checkpoint? May be like, basebackup, createdb or something?\n> > + The checkpoint is started because some operation forced a checkpoint.\n> >\n> > 12) Can you be a bit elobartive here who waits? 
Something like the\n> > backend that requested checkpoint will wait until it's completion ....\n> > + Wait for completion before returning.\n> >\n> > 13) \"removing unneeded or flushing needed logical rewrite mapping files\"\n> > + The checkpointer process is currently removing/flushing the logical\n> >\n> > 14) \"old WAL files\"\n> > + The checkpointer process is currently recycling old XLOG files.\n>\n> I will take care in the next patch.\n>\n> Thanks & Regards,\n> Nitin Jadhav\n>\n> On Wed, Mar 2, 2022 at 11:52 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Wed, Mar 2, 2022 at 4:45 PM Nitin Jadhav\n> > <nitinjadhavpostgres@gmail.com> wrote:\n> > > > Also, how about special phases for SyncPostCheckpoint(),\n> > > > SyncPreCheckpoint(), InvalidateObsoleteReplicationSlots(),\n> > > > PreallocXlogFiles() (it currently pre-allocates only 1 WAL file, but\n> > > > it might be increase in future (?)), TruncateSUBTRANS()?\n> > >\n> > > SyncPreCheckpoint() is just incrementing a counter and\n> > > PreallocXlogFiles() currently pre-allocates only 1 WAL file. I feel\n> > > there is no need to add any phases for these as of now. We can add in\n> > > the future if necessary. Added phases for SyncPostCheckpoint(),\n> > > InvalidateObsoleteReplicationSlots() and TruncateSUBTRANS().\n> >\n> > Okay.\n> >\n> > > > 10) I'm not sure if it's discussed, how about adding the number of\n> > > > snapshot/mapping files so far the checkpoint has processed in file\n> > > > processing while loops of\n> > > > CheckPointSnapBuild/CheckPointLogicalRewriteHeap? Sometimes, there can\n> > > > be many logical snapshot or mapping files and users may be interested\n> > > > in knowing the so-far-processed-file-count.\n> > >\n> > > I had thought about this while sharing the v1 patch and mentioned my\n> > > views upthread. I feel it won't give meaningful progress information\n> > > (It can be treated as statistics). Hence not included. Thoughts?\n> >\n> > Okay. 
If there are any complaints about it we can always add them later.\n> >\n> > > > > > As mentioned upthread, there can be multiple backends that request a\n> > > > > > checkpoint, so unless we want to store an array of pid we should store a number\n> > > > > > of backend that are waiting for a new checkpoint.\n> > > > >\n> > > > > Yeah, you are right. Let's not go that path and store an array of\n> > > > > pids. I don't see a strong use-case with the pid of the process\n> > > > > requesting checkpoint. If required, we can add it later once the\n> > > > > pg_stat_progress_checkpoint view gets in.\n> > > >\n> > > > I don't think that's really necessary to give the pid list.\n> > > >\n> > > > If you requested a new checkpoint, it doesn't matter if it's only your backend\n> > > > that triggered it, another backend or a few other dozen, the result will be the\n> > > > same and you have the information that the request has been seen. We could\n> > > > store just a bool for that but having a number instead also gives a bit more\n> > > > information and may allow you to detect some broken logic on your client code\n> > > > if it keeps increasing.\n> > >\n> > > It's a good metric to show in the view but the information is not\n> > > readily available. Additional code is required to calculate the number\n> > > of requests. Is it worth doing that? 
I feel this can be added later if\n> > > required.\n> >\n> > Yes, we can always add it later if required.\n> >\n> > > Please find the v4 patch attached and share your thoughts.\n> >\n> > I reviewed v4 patch, here are my comments:\n> >\n> > 1) Can we convert below into pgstat_progress_update_multi_param, just\n> > to avoid function calls?\n> > pgstat_progress_update_param(PROGRESS_CHECKPOINT_LSN, checkPoint.redo);\n> > pgstat_progress_update_param(PROGRESS_CHECKPOINT_PHASE,\n> >\n> > 2) Why are we not having special phase for CheckPointReplicationOrigin\n> > as it does good bunch of work (writing to disk, XLogFlush,\n> > durable_rename) especially when max_replication_slots is large?\n> >\n> > 3) I don't think \"requested\" is necessary here as it doesn't add any\n> > value or it's not a checkpoint kind or such, you can remove it.\n> >\n> > 4) s:/'recycling old XLOG files'/'recycling old WAL files'\n> > + WHEN 16 THEN 'recycling old XLOG files'\n> >\n> > 5) Can we place CREATE VIEW pg_stat_progress_checkpoint AS definition\n> > next to pg_stat_progress_copy in system_view.sql? It looks like all\n> > the progress reporting views are next to each other.\n> >\n> > 6) How about shutdown and end-of-recovery checkpoint? Are you planning\n> > to have an ereport_startup_progress mechanism as 0002?\n> >\n> > 7) I think you don't need to call checkpoint_progress_start and\n> > pgstat_progress_update_param, any other progress reporting function\n> > for shutdown and end-of-recovery checkpoint right?\n> >\n> > 8) Not for all kinds of checkpoints right? pg_stat_progress_checkpoint\n> > can't show progress report for shutdown and end-of-recovery\n> > checkpoint, I think you need to specify that here in wal.sgml and\n> > checkpoint.sgml.\n> > + command <command>CHECKPOINT</command>. The checkpointer process running the\n> > + checkpoint will report its progress in the\n> > + <structname>pg_stat_progress_checkpoint</structname> view. 
See\n> > + <xref linkend=\"checkpoint-progress-reporting\"/> for details.\n> >\n> > 9) Can you add a test case for pg_stat_progress_checkpoint view? I\n> > think it's good to add one. See, below for reference:\n> > -- Add a trigger to catch and print the contents of the catalog view\n> > -- pg_stat_progress_copy during data insertion. This allows to test\n> > -- the validation of some progress reports for COPY FROM where the trigger\n> > -- would fire.\n> > create function notice_after_tab_progress_reporting() returns trigger AS\n> > $$\n> > declare report record;\n> >\n> > 10) Typo: it's not \"is happens\"\n> > + The checkpoint is happens without delays.\n> >\n> > 11) Can you be specific what are those \"some operations\" that forced a\n> > checkpoint? May be like, basebackup, createdb or something?\n> > + The checkpoint is started because some operation forced a checkpoint.\n> >\n> > 12) Can you be a bit elobartive here who waits? Something like the\n> > backend that requested checkpoint will wait until it's completion ....\n> > + Wait for completion before returning.\n> >\n> > 13) \"removing unneeded or flushing needed logical rewrite mapping files\"\n> > + The checkpointer process is currently removing/flushing the logical\n> >\n> > 14) \"old WAL files\"\n> > + The checkpointer process is currently recycling old XLOG files.\n> >\n> > Regards,\n> > Bharath Rupireddy.", "msg_date": "Tue, 8 Mar 2022 20:25:28 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "> > [local]:5432 ashu@postgres=# select * from pg_stat_progress_checkpoint;\n> > -[ RECORD 1 ]-----+-------------------------------------\n> > pid | 22043\n> > type | checkpoint\n> > kind | immediate force wait requested time\n> >\n> > I think the output in the kind column can be displayed as {immediate,\n> > force, 
wait, requested, time}. By the way these are all checkpoint\n> > flags so it is better to display it as checkpoint flags instead of\n> > checkpoint kind as mentioned in one of my previous comments.\n>\n> I will update in the next patch.\n\nThe current format matches with the server log message for the\ncheckpoint start in LogCheckpointStart(). Just to be consistent, I\nhave not changed the code.\n\nI have taken care of the rest of the comments in v5 patch for which\nthere was clarity.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Mon, Mar 7, 2022 at 8:15 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>type</structfield> <type>text</type>\n> > + </para>\n> > + <para>\n> > + Type of checkpoint. See <xref linkend=\"checkpoint-types\"/>.\n> > + </para></entry>\n> > + </row>\n> > +\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>kind</structfield> <type>text</type>\n> > + </para>\n> > + <para>\n> > + Kind of checkpoint. See <xref linkend=\"checkpoint-kinds\"/>.\n> > + </para></entry>\n> > + </row>\n> >\n> > This looks a bit confusing. Two columns, one with the name \"checkpoint\n> > types\" and another \"checkpoint kinds\". You can probably rename\n> > checkpoint-kinds to checkpoint-flags and let the checkpoint-types be\n> > as-it-is.\n>\n> Makes sense. I will change in the next patch.\n> ---\n>\n> > + <entry><structname>pg_stat_progress_checkpoint</structname><indexterm><primary>pg_stat_progress_checkpoint</primary></indexterm></entry>\n> > + <entry>One row only, showing the progress of the checkpoint.\n> >\n> > Let's make this message consistent with the already existing message\n> > for pg_stat_wal_receiver. See description for pg_stat_wal_receiver\n> > view in \"Dynamic Statistics Views\" table.\n>\n> You want me to change \"One row only\" to \"Only one row\" ? 
If that is\n> the case then for other views in the \"Collected Statistics Views\"\n> table, it is referred as \"One row only\". Let me know if you are\n> pointing out something else.\n> ---\n>\n> > [local]:5432 ashu@postgres=# select * from pg_stat_progress_checkpoint;\n> > -[ RECORD 1 ]-----+-------------------------------------\n> > pid | 22043\n> > type | checkpoint\n> > kind | immediate force wait requested time\n> >\n> > I think the output in the kind column can be displayed as {immediate,\n> > force, wait, requested, time}. By the way these are all checkpoint\n> > flags so it is better to display it as checkpoint flags instead of\n> > checkpoint kind as mentioned in one of my previous comments.\n>\n> I will update in the next patch.\n> ---\n>\n> > [local]:5432 ashu@postgres=# select * from pg_stat_progress_checkpoint;\n> > -[ RECORD 1 ]-----+-------------------------------------\n> > pid | 22043\n> > type | checkpoint\n> > kind | immediate force wait requested time\n> > start_lsn | 0/14C60F8\n> > start_time | 2022-03-03 18:59:56.018662+05:30\n> > phase | performing two phase checkpoint\n> >\n> > This is the output I see when the checkpointer process has come out of\n> > the two phase checkpoint and is currently writing checkpoint xlog\n> > records and doing other stuff like updating control files etc. Is this\n> > okay?\n>\n> The idea behind choosing the phases is based on the functionality\n> which takes longer time to execute. Since after two phase checkpoint\n> till post checkpoint cleanup won't take much time to execute, I have\n> not added any additional phase for that. But I also agree that this\n> gives wrong information to the user. How about mentioning the phase\n> information at the end of each phase like \"Initializing\",\n> \"Initialization done\", ..., \"two phase checkpoint done\", \"post\n> checkpoint cleanup done\", .., \"finalizing\". 
Except for the first phase\n> (\"initializing\") and last phase (\"finalizing\"), all the other phases\n> describe the end of a certain operation. I feel this gives correct\n> information even though the phase name/description does not represent\n> the entire code block between two phases. For example if the current\n> phase is ''two phase checkpoint done\". Then the user can infer that\n> the checkpointer has done till two phase checkpoint and it is doing\n> other stuff that are after that. Thoughts?\n> ---\n>\n> > The output of log_checkpoint shows the number of buffers written is 3\n> > whereas the output of pg_stat_progress_checkpoint shows it as 0. See\n> > below:\n> >\n> > 2022-03-03 20:04:45.643 IST [22043] LOG: checkpoint complete: wrote 3\n> > buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled;\n> > write=24.652 s, sync=104.256 s, total=3889.625 s; sync files=2,\n> > longest=0.011 s, average=0.008 s; distance=0 kB, estimate=0 kB\n> >\n> > --\n> >\n> > [local]:5432 ashu@postgres=# select * from pg_stat_progress_checkpoint;\n> > -[ RECORD 1 ]-----+-------------------------------------\n> > pid | 22043\n> > type | checkpoint\n> > kind | immediate force wait requested time\n> > start_lsn | 0/14C60F8\n> > start_time | 2022-03-03 18:59:56.018662+05:30\n> > phase | finalizing\n> > buffers_total | 0\n> > buffers_processed | 0\n> > buffers_written | 0\n> >\n> > Any idea why this mismatch?\n>\n> Good catch. In BufferSync() we have 'num_to_scan' (buffers_total)\n> which indicates the total number of buffers to be processed. Based on\n> that, the 'buffers_processed' and 'buffers_written' counter gets\n> incremented. I meant these values may reach upto 'buffers_total'. The\n> current pg_stat_progress_view support above information. There is\n> another place when 'ckpt_bufs_written' gets incremented (In\n> SlruInternalWritePage()). 
This increment is above the 'buffers_total'\n> value and it is included in the server log message (checkpoint end)\n> and not included in the view. I am a bit confused here. If we include\n> this increment in the view then we cannot calculate the exact\n> 'buffers_total' beforehand. Can we increment the 'buffers_total' also\n> when 'ckpt_bufs_written' gets incremented so that we can match the\n> behaviour with the checkpoint end message? Please share your thoughts.\n> ---\n>\n> > I think we can add a couple of more information to this view -\n> > start_time for buffer write operation and start_time for buffer sync\n> > operation. These are two very time consuming tasks in a checkpoint and\n> > people would find it useful to know how much time is being taken by\n> > the checkpoint in I/O operation phase. thoughts?\n>\n> The detailed progress is already shown for these 2 phases of the\n> checkpoint via 'buffers_processed', 'buffers_written' and\n> 'files_synced'. Hence I did not think about adding start times. If\n> they are really required, then I can add them.\n>\n> Thanks & Regards,\n> Nitin Jadhav\n>\n> On Thu, Mar 3, 2022 at 8:30 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > Here are some of my review comments on the latest patch:\n> >\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>type</structfield> <type>text</type>\n> > + </para>\n> > + <para>\n> > + Type of checkpoint. See <xref linkend=\"checkpoint-types\"/>.\n> > + </para></entry>\n> > + </row>\n> > +\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>kind</structfield> <type>text</type>\n> > + </para>\n> > + <para>\n> > + Kind of checkpoint. See <xref linkend=\"checkpoint-kinds\"/>.\n> > + </para></entry>\n> > + </row>\n> >\n> > This looks a bit confusing. Two columns, one with the name \"checkpoint\n> > types\" and another \"checkpoint kinds\". 
You can probably rename\n> > checkpoint-kinds to checkpoint-flags and let the checkpoint-types be\n> > as-it-is.\n> >\n> > ==\n> >\n> > + <entry><structname>pg_stat_progress_checkpoint</structname><indexterm><primary>pg_stat_progress_checkpoint</primary></indexterm></entry>\n> > + <entry>One row only, showing the progress of the checkpoint.\n> >\n> > Let's make this message consistent with the already existing message\n> > for pg_stat_wal_receiver. See description for pg_stat_wal_receiver\n> > view in \"Dynamic Statistics Views\" table.\n> >\n> > ==\n> >\n> > [local]:5432 ashu@postgres=# select * from pg_stat_progress_checkpoint;\n> > -[ RECORD 1 ]-----+-------------------------------------\n> > pid | 22043\n> > type | checkpoint\n> > kind | immediate force wait requested time\n> >\n> > I think the output in the kind column can be displayed as {immediate,\n> > force, wait, requested, time}. By the way these are all checkpoint\n> > flags so it is better to display it as checkpoint flags instead of\n> > checkpoint kind as mentioned in one of my previous comments.\n> >\n> > ==\n> >\n> > [local]:5432 ashu@postgres=# select * from pg_stat_progress_checkpoint;\n> > -[ RECORD 1 ]-----+-------------------------------------\n> > pid | 22043\n> > type | checkpoint\n> > kind | immediate force wait requested time\n> > start_lsn | 0/14C60F8\n> > start_time | 2022-03-03 18:59:56.018662+05:30\n> > phase | performing two phase checkpoint\n> >\n> >\n> > This is the output I see when the checkpointer process has come out of\n> > the two phase checkpoint and is currently writing checkpoint xlog\n> > records and doing other stuff like updating control files etc. Is this\n> > okay?\n> >\n> > ==\n> >\n> > The output of log_checkpoint shows the number of buffers written is 3\n> > whereas the output of pg_stat_progress_checkpoint shows it as 0. 
See\n> > below:\n> >\n> > 2022-03-03 20:04:45.643 IST [22043] LOG: checkpoint complete: wrote 3\n> > buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled;\n> > write=24.652 s, sync=104.256 s, total=3889.625 s; sync files=2,\n> > longest=0.011 s, average=0.008 s; distance=0 kB, estimate=0 kB\n> >\n> > --\n> >\n> > [local]:5432 ashu@postgres=# select * from pg_stat_progress_checkpoint;\n> > -[ RECORD 1 ]-----+-------------------------------------\n> > pid | 22043\n> > type | checkpoint\n> > kind | immediate force wait requested time\n> > start_lsn | 0/14C60F8\n> > start_time | 2022-03-03 18:59:56.018662+05:30\n> > phase | finalizing\n> > buffers_total | 0\n> > buffers_processed | 0\n> > buffers_written | 0\n> >\n> > Any idea why this mismatch?\n> >\n> > ==\n> >\n> > I think we can add a couple of more information to this view -\n> > start_time for buffer write operation and start_time for buffer sync\n> > operation. These are two very time consuming tasks in a checkpoint and\n> > people would find it useful to know how much time is being taken by\n> > the checkpoint in I/O operation phase. thoughts?\n> >\n> > --\n> > With Regards,\n> > Ashutosh Sharma.\n> >\n> > On Wed, Mar 2, 2022 at 4:45 PM Nitin Jadhav\n> > <nitinjadhavpostgres@gmail.com> wrote:\n> > >\n> > > Thanks for reviewing.\n> > >\n> > > > > > I suggested upthread to store the starting timeline instead. This way you can\n> > > > > > deduce whether it's a restartpoint or a checkpoint, but you can also deduce\n> > > > > > other information, like what was the starting WAL.\n> > > > >\n> > > > > I don't understand why we need the timeline here to just determine\n> > > > > whether it's a restartpoint or checkpoint.\n> > > >\n> > > > I'm not saying it's necessary, I'm saying that for the same space usage we can\n> > > > store something a bit more useful. 
If no one cares about having the starting\n> > > > timeline available for no extra cost then sure, let's just store the kind\n> > > > directly.\n> > >\n> > > Fixed.\n> > >\n> > > > 2) Can't we just have these checks inside CASE-WHEN-THEN-ELSE blocks\n> > > > directly instead of new function pg_stat_get_progress_checkpoint_kind?\n> > > > + snprintf(ckpt_kind, MAXPGPATH, \"%s%s%s%s%s%s%s%s%s\",\n> > > > + (flags == 0) ? \"unknown\" : \"\",\n> > > > + (flags & CHECKPOINT_IS_SHUTDOWN) ? \"shutdown \" : \"\",\n> > > > + (flags & CHECKPOINT_END_OF_RECOVERY) ? \"end-of-recovery \" : \"\",\n> > > > + (flags & CHECKPOINT_IMMEDIATE) ? \"immediate \" : \"\",\n> > > > + (flags & CHECKPOINT_FORCE) ? \"force \" : \"\",\n> > > > + (flags & CHECKPOINT_WAIT) ? \"wait \" : \"\",\n> > > > + (flags & CHECKPOINT_CAUSE_XLOG) ? \"wal \" : \"\",\n> > > > + (flags & CHECKPOINT_CAUSE_TIME) ? \"time \" : \"\",\n> > > > + (flags & CHECKPOINT_FLUSH_ALL) ? \"flush-all\" : \"\");\n> > >\n> > > Fixed.\n> > > ---\n> > >\n> > > > 5) Do we need a special phase for this checkpoint operation? I'm not\n> > > > sure in which cases it will take a long time, but it looks like\n> > > > there's a wait loop here.\n> > > > vxids = GetVirtualXIDsDelayingChkpt(&nvxids);\n> > > > if (nvxids > 0)\n> > > > {\n> > > > do\n> > > > {\n> > > > pg_usleep(10000L); /* wait for 10 msec */\n> > > > } while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids));\n> > > > }\n> > >\n> > > Yes. It is better to add a separate phase here.\n> > > ---\n> > >\n> > > > Also, how about special phases for SyncPostCheckpoint(),\n> > > > SyncPreCheckpoint(), InvalidateObsoleteReplicationSlots(),\n> > > > PreallocXlogFiles() (it currently pre-allocates only 1 WAL file, but\n> > > > it might be increase in future (?)), TruncateSUBTRANS()?\n> > >\n> > > SyncPreCheckpoint() is just incrementing a counter and\n> > > PreallocXlogFiles() currently pre-allocates only 1 WAL file. I feel\n> > > there is no need to add any phases for these as of now. 
We can add in\n> > > the future if necessary. Added phases for SyncPostCheckpoint(),\n> > > InvalidateObsoleteReplicationSlots() and TruncateSUBTRANS().\n> > > ---\n> > >\n> > > > 6) SLRU (Simple LRU) isn't a phase here, you can just say\n> > > > PROGRESS_CHECKPOINT_PHASE_PREDICATE_LOCK_PAGES.\n> > > > +\n> > > > + pgstat_progress_update_param(PROGRESS_CHECKPOINT_PHASE,\n> > > > + PROGRESS_CHECKPOINT_PHASE_SLRU_PAGES);\n> > > > CheckPointPredicate();\n> > > >\n> > > > And :s/checkpointing SLRU pages/checkpointing predicate lock pages\n> > > >+ WHEN 9 THEN 'checkpointing SLRU pages'\n> > >\n> > > Fixed.\n> > > ---\n> > >\n> > > > 7) :s/PROGRESS_CHECKPOINT_PHASE_FILE_SYNC/PROGRESS_CHECKPOINT_PHASE_PROCESS_FILE_SYNC_REQUESTS\n> > >\n> > > I feel PROGRESS_CHECKPOINT_PHASE_FILE_SYNC is a better option here as\n> > > it describes the purpose in less words.\n> > >\n> > > > And :s/WHEN 11 THEN 'performing sync requests'/WHEN 11 THEN\n> > > > 'processing file sync requests'\n> > >\n> > > Fixed.\n> > > ---\n> > >\n> > > > 8) :s/Finalizing/finalizing\n> > > > + WHEN 14 THEN 'Finalizing'\n> > >\n> > > Fixed.\n> > > ---\n> > >\n> > > > 9) :s/checkpointing snapshots/checkpointing logical replication snapshot files\n> > > > + WHEN 3 THEN 'checkpointing snapshots'\n> > > > :s/checkpointing logical rewrite mappings/checkpointing logical\n> > > > replication rewrite mapping files\n> > > > + WHEN 4 THEN 'checkpointing logical rewrite mappings'\n> > >\n> > > Fixed.\n> > > ---\n> > >\n> > > > 10) I'm not sure if it's discussed, how about adding the number of\n> > > > snapshot/mapping files so far the checkpoint has processed in file\n> > > > processing while loops of\n> > > > CheckPointSnapBuild/CheckPointLogicalRewriteHeap? Sometimes, there can\n> > > > be many logical snapshot or mapping files and users may be interested\n> > > > in knowing the so-far-processed-file-count.\n> > >\n> > > I had thought about this while sharing the v1 patch and mentioned my\n> > > views upthread. 
I feel it won't give meaningful progress information\n> > > (It can be treated as statistics). Hence not included. Thoughts?\n> > >\n> > > > > > As mentioned upthread, there can be multiple backends that request a\n> > > > > > checkpoint, so unless we want to store an array of pid we should store a number\n> > > > > > of backend that are waiting for a new checkpoint.\n> > > > >\n> > > > > Yeah, you are right. Let's not go that path and store an array of\n> > > > > pids. I don't see a strong use-case with the pid of the process\n> > > > > requesting checkpoint. If required, we can add it later once the\n> > > > > pg_stat_progress_checkpoint view gets in.\n> > > >\n> > > > I don't think that's really necessary to give the pid list.\n> > > >\n> > > > If you requested a new checkpoint, it doesn't matter if it's only your backend\n> > > > that triggered it, another backend or a few other dozen, the result will be the\n> > > > same and you have the information that the request has been seen. We could\n> > > > store just a bool for that but having a number instead also gives a bit more\n> > > > information and may allow you to detect some broken logic on your client code\n> > > > if it keeps increasing.\n> > >\n> > > It's a good metric to show in the view but the information is not\n> > > readily available. Additional code is required to calculate the number\n> > > of requests. Is it worth doing that? 
I feel this can be added later if\n> > > required.\n> > >\n> > > Please find the v4 patch attached and share your thoughts.\n> > >\n> > > Thanks & Regards,\n> > > Nitin Jadhav\n> > >\n> > > On Tue, Mar 1, 2022 at 2:27 PM Nitin Jadhav\n> > > <nitinjadhavpostgres@gmail.com> wrote:\n> > > >\n> > > > > > 3) Why do we need this extra calculation for start_lsn?\n> > > > > > Do you ever see a negative LSN or something here?\n> > > > > > + ('0/0'::pg_lsn + (\n> > > > > > + CASE\n> > > > > > + WHEN (s.param3 < 0) THEN pow((2)::numeric, (64)::numeric)\n> > > > > > + ELSE (0)::numeric\n> > > > > > + END + (s.param3)::numeric)) AS start_lsn,\n> > > > >\n> > > > > Yes: LSN can take up all of an uint64; whereas the pgstat column is a\n> > > > > bigint type; thus the signed int64. This cast is OK as it wraps\n> > > > > around, but that means we have to take care to correctly display the\n> > > > > LSN when it is > 0x7FFF_FFFF_FFFF_FFFF; which is what we do here using\n> > > > > the special-casing for negative values.\n> > > >\n> > > > Yes. The extra calculation is required here as we are storing a uint64\n> > > > value in a variable of type int64. When we convert uint64 to int64\n> > > > then the bit pattern is preserved (so no data is lost). The high-order\n> > > > bit becomes the sign bit and if the sign bit is set, both the sign and\n> > > > magnitude of the value change. To safely get the actual uint64 value\n> > > > that was assigned, we need the above calculations.\n> > > >\n> > > > > > 4) Can't you use timestamptz_in(to_char(s.param4)) instead of\n> > > > > > pg_stat_get_progress_checkpoint_start_time? 
I don't quite understand\n> > > > > > the reasoning for having this function and it's named as *checkpoint*\n> > > > > > when it doesn't do anything specific to the checkpoint at all?\n> > > > >\n> > > > > I hadn't thought of using the types' inout functions, but it looks\n> > > > > like timestamp IO functions use a formatted timestring, which won't\n> > > > > work with the epoch-based timestamp stored in the view.\n> > > >\n> > > > There is a variation of to_timestamp() which takes UNIX epoch (float8)\n> > > > as an argument and converts it to timestamptz but we cannot directly\n> > > > call this function with S.param4.\n> > > >\n> > > > TimestampTz\n> > > > GetCurrentTimestamp(void)\n> > > > {\n> > > > TimestampTz result;\n> > > > struct timeval tp;\n> > > >\n> > > > gettimeofday(&tp, NULL);\n> > > >\n> > > > result = (TimestampTz) tp.tv_sec -\n> > > > ((POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) * SECS_PER_DAY);\n> > > > result = (result * USECS_PER_SEC) + tp.tv_usec;\n> > > >\n> > > > return result;\n> > > > }\n> > > >\n> > > > S.param4 contains the output of the above function\n> > > > (GetCurrentTimestamp()) which returns Postgres epoch but the\n> > > > to_timestamp() expects UNIX epoch as input. So some calculation is\n> > > > required here. I feel the SQL 'to_timestamp(946684800 +\n> > > > (S.param4::float / 1000000)) AS start_time' works fine. The value\n> > > > '946684800' is equal to ((POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) *\n> > > > SECS_PER_DAY). I am not sure whether it is good practice to use this\n> > > > way. 
Kindly share your thoughts.\n> > > >\n> > > > Thanks & Regards,\n> > > > Nitin Jadhav\n> > > >\n> > > > On Mon, Feb 28, 2022 at 6:40 PM Matthias van de Meent\n> > > > <boekewurm+postgres@gmail.com> wrote:\n> > > > >\n> > > > > On Sun, 27 Feb 2022 at 16:14, Bharath Rupireddy\n> > > > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > > > 3) Why do we need this extra calculation for start_lsn?\n> > > > > > Do you ever see a negative LSN or something here?\n> > > > > > + ('0/0'::pg_lsn + (\n> > > > > > + CASE\n> > > > > > + WHEN (s.param3 < 0) THEN pow((2)::numeric, (64)::numeric)\n> > > > > > + ELSE (0)::numeric\n> > > > > > + END + (s.param3)::numeric)) AS start_lsn,\n> > > > >\n> > > > > Yes: LSN can take up all of an uint64; whereas the pgstat column is a\n> > > > > bigint type; thus the signed int64. This cast is OK as it wraps\n> > > > > around, but that means we have to take care to correctly display the\n> > > > > LSN when it is > 0x7FFF_FFFF_FFFF_FFFF; which is what we do here using\n> > > > > the special-casing for negative values.\n> > > > >\n> > > > > As to whether it is reasonable: Generating 16GB of wal every second\n> > > > > (2^34 bytes /sec) is probably not impossible (cpu <> memory bandwidth\n> > > > > has been > 20GB/sec for a while); and that leaves you 2^29 seconds of\n> > > > > database runtime; or about 17 years. Seeing that a cluster can be\n> > > > > `pg_upgrade`d (which doesn't reset cluster LSN) since PG 9.0 from at\n> > > > > least version PG 8.4.0 (2009) (and through pg_migrator, from 8.3.0)),\n> > > > > we can assume that clusters hitting LSN=2^63 will be a reasonable\n> > > > > possibility within the next few years. 
As the lifespan of a PG release\n> > > > > is about 5 years, it doesn't seem impossible that there will be actual\n> > > > > clusters that are going to hit this naturally in the lifespan of PG15.\n> > > > >\n> > > > > It is also possible that someone fat-fingers pg_resetwal; and creates\n> > > > > a cluster with LSN >= 2^63; resulting in negative values in the\n> > > > > s.param3 field. Not likely, but we can force such situations; and as\n> > > > > such we should handle that gracefully.\n> > > > >\n> > > > > > 4) Can't you use timestamptz_in(to_char(s.param4)) instead of\n> > > > > > pg_stat_get_progress_checkpoint_start_time? I don't quite understand\n> > > > > > the reasoning for having this function and it's named as *checkpoint*\n> > > > > > when it doesn't do anything specific to the checkpoint at all?\n> > > > >\n> > > > > I hadn't thought of using the types' inout functions, but it looks\n> > > > > like timestamp IO functions use a formatted timestring, which won't\n> > > > > work with the epoch-based timestamp stored in the view.\n> > > > >\n> > > > > > Having 3 unnecessary functions that aren't useful to the users at all\n> > > > > > in proc.dat will simply eatup the function oids IMO. Hence, I suggest\n> > > > > > let's try to do without extra functions.\n> > > > >\n> > > > > I agree that (1) could be simplified, or at least fully expressed in\n> > > > > SQL without exposing too many internals. 
If we're fine with exposing\n> > > > > internals like flags and type layouts, then (2), and arguably (4), can\n> > > > > be expressed in SQL as well.\n> > > > >\n> > > > > -Matthias\n\n\n", "msg_date": "Tue, 8 Mar 2022 20:30:49 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "> > > > > As mentioned upthread, there can be multiple backends that request a\n> > > > > checkpoint, so unless we want to store an array of pid we should store a number\n> > > > > of backend that are waiting for a new checkpoint.\n> >\n> > It's a good metric to show in the view but the information is not\n> > readily available. Additional code is required to calculate the number\n> > of requests. Is it worth doing that? I feel this can be added later if\n> > required.\n>\n> Is it that hard or costly to do? Just sending a message to increment\n> the stat counter in RequestCheckpoint() would be enough.\n>\n> Also, unless I'm missing something it's still only showing the initial\n> checkpoint flags, so it's *not* showing what the checkpoint is really\n> doing, only what the checkpoint may be doing if nothing else happens.\n> It just feels wrong. You could even use that ckpt_flags info to know\n> that at least one backend has requested a new checkpoint, if you don't\n> want to have a number of backends.\n\nI just wanted to avoid extra calculations just to show the progress in\nthe view. Since it's a good metric, I have added an additional field\nnamed 'next_flags' to the view which holds all possible flag values of\nthe next checkpoint. This gives more information than just saying\nwhether the new checkpoint is requested or not with the same memory. I\nam updating the progress of 'next_flags' in\nImmediateCheckpointRequested() which gets called during buffer write\nphase. 
I gave a thought to updating the progress in other places also,\nbut I feel updating it in ImmediateCheckpointRequested() is enough, as the\ncurrent checkpoint's behaviour is affected only by the\nCHECKPOINT_IMMEDIATE flag, and all other checkpoint requests made in\nthe case of createdb(), dropdb(), etc. are issued with the\nCHECKPOINT_IMMEDIATE flag. I have updated this in the v5 patch. Please\nshare your thoughts.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Thu, Mar 3, 2022 at 11:58 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Wed, Mar 2, 2022 at 7:15 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> >\n> > > > > As mentioned upthread, there can be multiple backends that request a\n> > > > > checkpoint, so unless we want to store an array of pid we should store a number\n> > > > > of backend that are waiting for a new checkpoint.\n> >\n> > It's a good metric to show in the view but the information is not\n> > readily available. Additional code is required to calculate the number\n> > of requests. Is it worth doing that? I feel this can be added later if\n> > required.\n>\n> Is it that hard or costly to do? Just sending a message to increment\n> the stat counter in RequestCheckpoint() would be enough.\n>\n> Also, unless I'm missing something it's still only showing the initial\n> checkpoint flags, so it's *not* showing what the checkpoint is really\n> doing, only what the checkpoint may be doing if nothing else happens.\n> It just feels wrong. 
You could even use that ckpt_flags info to know\n> that at least one backend has requested a new checkpoint, if you don't\n> want to have a number of backends.\n\n\n", "msg_date": "Tue, 8 Mar 2022 20:57:23 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "On Tue, Mar 8, 2022 at 8:31 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > > [local]:5432 ashu@postgres=# select * from pg_stat_progress_checkpoint;\n> > > -[ RECORD 1 ]-----+-------------------------------------\n> > > pid | 22043\n> > > type | checkpoint\n> > > kind | immediate force wait requested time\n> > >\n> > > I think the output in the kind column can be displayed as {immediate,\n> > > force, wait, requested, time}. By the way these are all checkpoint\n> > > flags so it is better to display it as checkpoint flags instead of\n> > > checkpoint kind as mentioned in one of my previous comments.\n> >\n> > I will update in the next patch.\n>\n> The current format matches with the server log message for the\n> checkpoint start in LogCheckpointStart(). Just to be consistent, I\n> have not changed the code.\n>\n\nSee below, how flags are shown in other sql functions like:\n\nashu@postgres=# select * from heap_tuple_infomask_flags(2304, 1);\n raw_flags | combined_flags\n-----------------------------------------+----------------\n {HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID} | {}\n(1 row)\n\nThis looks more readable and it's easy to understand for the\nend-users.. Further comparing the way log messages are displayed with\nthe way sql functions display its output doesn't look like a right\ncomparison to me. Obviously both should show matching data but the way\nit is shown doesn't need to be the same. 
In fact it is not in most of\nthe cases.\n\n> I have taken care of the rest of the comments in v5 patch for which\n> there was clarity.\n>\n\nThank you very much. Will take a look at it later.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Wed, 9 Mar 2022 19:07:32 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "On Tue, Mar 08, 2022 at 08:57:23PM +0530, Nitin Jadhav wrote:\n> \n> I just wanted to avoid extra calculations just to show the progress in\n> the view. Since it's a good metric, I have added an additional field\n> named 'next_flags' to the view which holds all possible flag values of\n> the next checkpoint.\n\nI still don't think that's ok. IIUC the only way to know if the current\ncheckpoint is throttled or not is to be aware that the \"next_flags\" can apply\nto the current checkpoint too, look for it and see if that changes the\nsemantics of what the view say the current checkpoint is. Most users will get\nit wrong.\n\n> This gives more information than just saying\n> whether the new checkpoint is requested or not with the same memory.\n\nSo that next_flags will be empty most of the time? It seems confusing.\n\nAgain I would just display a bool flag saying whether a new checkpoint has been\nexplicitly requested or not, it seems enough.\n\nIf you're interested in that next checkpoint, you probably want a quick\ncompletion of the current checkpoint first (and thus need to know if it's\nthrottled or not). 
And then you will have to keep monitoring that view for the\nnext checkpoint anyway, and at that point the view will show the relevant\ninformation.\n\n\n", "msg_date": "Wed, 9 Mar 2022 22:17:50 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "> > The current format matches with the server log message for the\n> > checkpoint start in LogCheckpointStart(). Just to be consistent, I\n> > have not changed the code.\n> >\n>\n> See below, how flags are shown in other sql functions like:\n>\n> ashu@postgres=# select * from heap_tuple_infomask_flags(2304, 1);\n> raw_flags | combined_flags\n> -----------------------------------------+----------------\n> {HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID} | {}\n> (1 row)\n>\n> This looks more readable and it's easy to understand for the\n> end-users.. Further comparing the way log messages are displayed with\n> the way sql functions display its output doesn't look like a right\n> comparison to me. Obviously both should show matching data but the way\n> it is shown doesn't need to be the same. In fact it is not in most of\n> the cases.\n\nok. I will take care in the next patch. I would like to handle this at\nthe SQL level in system_views.sql. 
The following can be used to\ndisplay in the format described above.\n\n ( '{' ||\n CASE WHEN (S.param2 & 4) > 0 THEN 'immediate' ELSE '' END ||\n CASE WHEN (S.param2 & 4) > 0 AND (S.param2 & -8) > 0 THEN ',\n' ELSE '' END ||\n CASE WHEN (S.param2 & 8) > 0 THEN 'force' ELSE '' END ||\n CASE WHEN (S.param2 & 8) > 0 AND (S.param2 & -16) > 0 THEN\n', ' ELSE '' END ||\n CASE WHEN (S.param2 & 16) > 0 THEN 'flush-all' ELSE '' END ||\n CASE WHEN (S.param2 & 16) > 0 AND (S.param2 & -32) > 0 THEN\n', ' ELSE '' END ||\n CASE WHEN (S.param2 & 32) > 0 THEN 'wait' ELSE '' END ||\n CASE WHEN (S.param2 & 32) > 0 AND (S.param2 & -128) > 0 THEN\n', ' ELSE '' END ||\n CASE WHEN (S.param2 & 128) > 0 THEN 'wal' ELSE '' END ||\n CASE WHEN (S.param2 & 128) > 0 AND (S.param2 & -256) > 0\nTHEN ', ' ELSE '' END ||\n CASE WHEN (S.param2 & 256) > 0 THEN 'time' ELSE '' END\n || '}'\n\nBasically, a separate CASE statement is used to decide whether a comma\nhas to be printed or not, which is done by checking whether the\nprevious flag bit is enabled (so that the appropriate flag has to be\ndisplayed) and if any next bits are enabled (So there are some more\nflags to be displayed). Kindly let me know if you know any other\nbetter approach.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Wed, Mar 9, 2022 at 7:07 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> On Tue, Mar 8, 2022 at 8:31 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> >\n> > > > [local]:5432 ashu@postgres=# select * from pg_stat_progress_checkpoint;\n> > > > -[ RECORD 1 ]-----+-------------------------------------\n> > > > pid | 22043\n> > > > type | checkpoint\n> > > > kind | immediate force wait requested time\n> > > >\n> > > > I think the output in the kind column can be displayed as {immediate,\n> > > > force, wait, requested, time}. 
By the way these are all checkpoint\n> > > > flags so it is better to display it as checkpoint flags instead of\n> > > > checkpoint kind as mentioned in one of my previous comments.\n> > >\n> > > I will update in the next patch.\n> >\n> > The current format matches with the server log message for the\n> > checkpoint start in LogCheckpointStart(). Just to be consistent, I\n> > have not changed the code.\n> >\n>\n> See below, how flags are shown in other sql functions like:\n>\n> ashu@postgres=# select * from heap_tuple_infomask_flags(2304, 1);\n> raw_flags | combined_flags\n> -----------------------------------------+----------------\n> {HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID} | {}\n> (1 row)\n>\n> This looks more readable and it's easy to understand for the\n> end-users.. Further comparing the way log messages are displayed with\n> the way sql functions display its output doesn't look like a right\n> comparison to me. Obviously both should show matching data but the way\n> it is shown doesn't need to be the same. In fact it is not in most of\n> the cases.\n>\n> > I have taken care of the rest of the comments in v5 patch for which\n> > there was clarity.\n> >\n>\n> Thank you very much. Will take a look at it later.\n>\n> --\n> With Regards,\n> Ashutosh Sharma.\n\n\n", "msg_date": "Fri, 11 Mar 2022 14:17:59 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "> > I just wanted to avoid extra calculations just to show the progress in\n> > the view. Since it's a good metric, I have added an additional field\n> > named 'next_flags' to the view which holds all possible flag values of\n> > the next checkpoint.\n>\n> I still don't think that's ok. 
IIUC the only way to know if the current\n> checkpoint is throttled or not is to be aware that the \"next_flags\" can apply\n> to the current checkpoint too, look for it and see if that changes the\n> semantics of what the view say the current checkpoint is. Most users will get\n> it wrong.\n>\n> Again I would just display a bool flag saying whether a new checkpoint has been\n> explicitly requested or not, it seems enough.\n\nOk. I agree that it is difficult to interpret it correctly. So even if\nwe say that a new checkpoint has been explicitly requested, the user may\nnot understand that it affects the current checkpoint's behaviour unless the\nuser knows the internals of the checkpoint. How about naming the field\n'throttled' (yes/no), since our objective is to show whether the\ncurrent checkpoint is throttled or not?\n\nThanks & Regards,\nNitin Jadhav\n\nOn Wed, Mar 9, 2022 at 7:48 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Tue, Mar 08, 2022 at 08:57:23PM +0530, Nitin Jadhav wrote:\n> >\n> > I just wanted to avoid extra calculations just to show the progress in\n> > the view. Since it's a good metric, I have added an additional field\n> > named 'next_flags' to the view which holds all possible flag values of\n> > the next checkpoint.\n>\n> I still don't think that's ok. IIUC the only way to know if the current\n> checkpoint is throttled or not is to be aware that the \"next_flags\" can apply\n> to the current checkpoint too, look for it and see if that changes the\n> semantics of what the view say the current checkpoint is. Most users will get\n> it wrong.\n>\n> > This gives more information than just saying\n> > whether the new checkpoint is requested or not with the same memory.\n>\n> So that next_flags will be empty most of the time? 
It seems confusing.\n>\n> Again I would just display a bool flag saying whether a new checkpoint has been\n> explicitly requested or not, it seems enough.\n>\n> If you're interested in that next checkpoint, you probably want a quick\n> completion of the current checkpoint first (and thus need to know if it's\n> throttled or not). And then you will have to keep monitoring that view for the\n> next checkpoint anyway, and at that point the view will show the relevant\n> information.\n\n\n", "msg_date": "Fri, 11 Mar 2022 14:41:23 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "On Fri, Mar 11, 2022 at 02:41:23PM +0530, Nitin Jadhav wrote:\n>\n> Ok. I agree that it is difficult to interpret it correctly. So even if\n> say that a new checkpoint has been explicitly requested, the user may\n> not understand that it affects current checkpoint behaviour unless the\n> user knows the internals of the checkpoint. How about naming the field\n> to 'throttled' (Yes/ No) since our objective is to show that the\n> current checkpoint is throttled or not.\n\n-1\n\nThat \"throttled\" flag should be the same as having or not a \"force\" in the\nflags. We should be consistent and report information the same way, so either\na lot of flags (is_throttled, is_force...) or as now a single field containing\nthe set flags, so the current approach seems better. 
Also, it wouldn't be much\nbetter to show the checkpoint as not having the force flags and still not being\nthrottled.\n\nWhy not just reporting (ckpt_flags & (CHECKPOINT_REQUESTED |\nCHECKPOINT_IMMEDIATE)) in the path(s) that can update the new flags for the\nview?\n\nCHECKPOINT_REQUESTED will always be set by RequestCheckpoint(), and can be used\nto detect that someone wants a new checkpoint afterwards, whatever it's and\nwhether or not the current checkpoint to be finished quickly. For this flag I\nthink it's better to not report it in the view flags but with a new field, as\ndiscussed before, as it's really what it means.\n\nCHECKPOINT_IMMEDIATE is the only new flag that can be used in an already in\nprogress checkpoint, so it can be simply added to the view flags.\n\n\n", "msg_date": "Fri, 11 Mar 2022 18:04:28 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "> >\n> > Ok. I agree that it is difficult to interpret it correctly. So even if\n> > say that a new checkpoint has been explicitly requested, the user may\n> > not understand that it affects current checkpoint behaviour unless the\n> > user knows the internals of the checkpoint. How about naming the field\n> > to 'throttled' (Yes/ No) since our objective is to show that the\n> > current checkpoint is throttled or not.\n>\n> -1\n>\n> That \"throttled\" flag should be the same as having or not a \"force\" in the\n> flags. We should be consistent and report information the same way, so either\n> a lot of flags (is_throttled, is_force...) or as now a single field containing\n> the set flags, so the current approach seems better. Also, it wouldn't be much\n> better to show the checkpoint as not having the force flags and still not being\n> throttled.\n\nI think your understanding is wrong here. 
The flag which affects\nthrottling behaviour is CHECKPOINT_IMMEDIATE. I am not suggesting\nremoving the existing 'flags' field of pg_stat_progress_checkpoint\nview and adding a new field 'throttled'. The content of the 'flags'\nfield remains the same. I was suggesting replacing the 'next_flags'\nfield with 'throttled' field since the new request with\nCHECKPOINT_IMMEDIATE flag enabled will affect the current checkpoint.\n\n> CHECKPOINT_REQUESTED will always be set by RequestCheckpoint(), and can be used\n> to detect that someone wants a new checkpoint afterwards, whatever it's and\n> whether or not the current checkpoint to be finished quickly. For this flag I\n> think it's better to not report it in the view flags but with a new field, as\n> discussed before, as it's really what it means.\n\nI understand your suggestion of adding a new field to indicate whether\nany of the new requests have been made or not. You just want this\nfield to represent only a new request or does it also represent the\ncurrent checkpoint to finish quickly.\n\n> CHECKPOINT_IMMEDIATE is the only new flag that can be used in an already in\n> progress checkpoint, so it can be simply added to the view flags.\n\nAs discussed upthread this is not advisable to do so. The content of\n'flags' remains the same through the checkpoint. We cannot add a new\ncheckpoint's flag (CHECKPOINT_IMMEDIATE ) to the current one even\nthough it affects current checkpoint behaviour. 
The only thing we can do\nis to add a new field to show that the current checkpoint is affected\nby new requests.\n\n> Why not just reporting (ckpt_flags & (CHECKPOINT_REQUESTED |\n> CHECKPOINT_IMMEDIATE)) in the path(s) that can update the new flags for the\n> view?\n\nWhere do you want to add this in the path?\n\nI feel the new field name is confusing here.\n'next_flags' - It shows all the flag values of the next checkpoint.\nBased on this, the user can get to know that a new request has been made\nand also, if CHECKPOINT_IMMEDIATE is enabled here, that the current\ncheckpoint also gets affected. You are not ok with using this name as it\nconfuses the user.\n'throttled' - The value will be set to Yes/No based on the\nCHECKPOINT_IMMEDIATE bit set in the new checkpoint request's flags.\nThis says that the current checkpoint is affected, and I also thought\nit is an indication that new requests have been made. But there is\nconfusion here too. If the current checkpoint starts with\nCHECKPOINT_IMMEDIATE, which is described by the 'flags' field, and there\nis no new request, then the value of this field is 'Yes' (not\nthrottling), which again confuses the user.\n'new request' - The value will be set to Yes/No based on whether any new\ncheckpoint requests have been made. This just indicates whether new\nrequests have been made or not. It cannot be used to infer other\ninformation.\n\nThoughts?\n\nThanks & Regards,\nNitin Jadhav\n\nOn Fri, Mar 11, 2022 at 3:34 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Fri, Mar 11, 2022 at 02:41:23PM +0530, Nitin Jadhav wrote:\n> >\n> > Ok. I agree that it is difficult to interpret it correctly. So even if\n> > say that a new checkpoint has been explicitly requested, the user may\n> > not understand that it affects current checkpoint behaviour unless the\n> > user knows the internals of the checkpoint. 
How about naming the field\n> > to 'throttled' (Yes/ No) since our objective is to show that the\n> > current checkpoint is throttled or not.\n>\n> -1\n>\n> That \"throttled\" flag should be the same as having or not a \"force\" in the\n> flags. We should be consistent and report information the same way, so either\n> a lot of flags (is_throttled, is_force...) or as now a single field containing\n> the set flags, so the current approach seems better. Also, it wouldn't be much\n> better to show the checkpoint as not having the force flags and still not being\n> throttled.\n>\n> Why not just reporting (ckpt_flags & (CHECKPOINT_REQUESTED |\n> CHECKPOINT_IMMEDIATE)) in the path(s) that can update the new flags for the\n> view?\n>\n> CHECKPOINT_REQUESTED will always be set by RequestCheckpoint(), and can be used\n> to detect that someone wants a new checkpoint afterwards, whatever it's and\n> whether or not the current checkpoint to be finished quickly. For this flag I\n> think it's better to not report it in the view flags but with a new field, as\n> discussed before, as it's really what it means.\n>\n> CHECKPOINT_IMMEDIATE is the only new flag that can be used in an already in\n> progress checkpoint, so it can be simply added to the view flags.\n\n\n", "msg_date": "Fri, 11 Mar 2022 16:59:11 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "On Fri, Mar 11, 2022 at 04:59:11PM +0530, Nitin Jadhav wrote:\n> > That \"throttled\" flag should be the same as having or not a \"force\" in the\n> > flags. We should be consistent and report information the same way, so either\n> > a lot of flags (is_throttled, is_force...) or as now a single field containing\n> > the set flags, so the current approach seems better. 
Also, it wouldn't be much\n> > better to show the checkpoint as not having the force flags and still not being\n> > throttled.\n> \n> I think your understanding is wrong here. The flag which affects\n> throttling behaviour is CHECKPOINT_IMMEDIATE.\n\nYes sorry, that's what I meant and later used in the flags.\n\n> I am not suggesting\n> removing the existing 'flags' field of pg_stat_progress_checkpoint\n> view and adding a new field 'throttled'. The content of the 'flags'\n> field remains the same. I was suggesting replacing the 'next_flags'\n> field with 'throttled' field since the new request with\n> CHECKPOINT_IMMEDIATE flag enabled will affect the current checkpoint.\n\nAre you saying that this new throttled flag will only be set by the overloaded\nflags in ckpt_flags? So you can have a checkpoint with a CHECKPOINT_IMMEDIATE\nflags that's throttled, and a checkpoint without the CHECKPOINT_IMMEDIATE flag\nthat's not throttled?\n\n> > CHECKPOINT_REQUESTED will always be set by RequestCheckpoint(), and can be used\n> > to detect that someone wants a new checkpoint afterwards, whatever it's and\n> > whether or not the current checkpoint to be finished quickly. For this flag I\n> > think it's better to not report it in the view flags but with a new field, as\n> > discussed before, as it's really what it means.\n> \n> I understand your suggestion of adding a new field to indicate whether\n> any of the new requests have been made or not. You just want this\n> field to represent only a new request or does it also represent the\n> current checkpoint to finish quickly.\n\nOnly represent what it means: a new checkpoint is requested. An additional\nCHECKPOINT_IMMEDIATE flag is orthogonal to this flag and this information.\n\n> > CHECKPOINT_IMMEDIATE is the only new flag that can be used in an already in\n> > progress checkpoint, so it can be simply added to the view flags.\n> \n> As discussed upthread this is not advisable to do so. 
The content of\n> 'flags' remains the same through the checkpoint. We cannot add a new\n> checkpoint's flag (CHECKPOINT_IMMEDIATE ) to the current one even\n> though it affects current checkpoint behaviour. Only thing we can do\n> is to add a new field to show that the current checkpoint is affected\n> with new requests.\n\nI don't get it. The checkpoint flags and the view flags (set by\npgstat_progrss_update*) are different, so why can't we add this flag to the\nview flags? The fact that checkpointer.c doesn't update the passed flag and\ninstead look in the shmem to see if CHECKPOINT_IMMEDIATE has been set since is\nan implementation detail, and the view shouldn't focus on which flags were\ninitially passed to the checkpointer but instead which flags the checkpointer\nis actually enforcing, as that's what the user should be interested in. If you\nwant to store it in another field internally but display it in the view with\nthe rest of the flags, I'm fine with it.\n\n> > Why not just reporting (ckpt_flags & (CHECKPOINT_REQUESTED |\n> > CHECKPOINT_IMMEDIATE)) in the path(s) that can update the new flags for the\n> > view?\n> \n> Where do you want to add this in the path?\n\nSame as in your current patch I guess.\n\n\n", "msg_date": "Fri, 11 Mar 2022 20:13:00 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "> > I am not suggesting\n> > removing the existing 'flags' field of pg_stat_progress_checkpoint\n> > view and adding a new field 'throttled'. The content of the 'flags'\n> > field remains the same. 
I was suggesting replacing the 'next_flags'\n> > field with 'throttled' field since the new request with\n> > CHECKPOINT_IMMEDIATE flag enabled will affect the current checkpoint.\n>\n> Are you saying that this new throttled flag will only be set by the overloaded\n> flags in ckpt_flags?\n\nYes. you are right.\n\n> So you can have a checkpoint with a CHECKPOINT_IMMEDIATE\n> flags that's throttled, and a checkpoint without the CHECKPOINT_IMMEDIATE flag\n> that's not throttled?\n\nI think it's the reverse. A checkpoint with a CHECKPOINT_IMMEDIATE\nflags that's not throttled (disables delays between writes) and a\ncheckpoint without the CHECKPOINT_IMMEDIATE flag that's throttled\n(enables delays between writes)\n\n> > > CHECKPOINT_REQUESTED will always be set by RequestCheckpoint(), and can be used\n> > > to detect that someone wants a new checkpoint afterwards, whatever it's and\n> > > whether or not the current checkpoint to be finished quickly. For this flag I\n> > > think it's better to not report it in the view flags but with a new field, as\n> > > discussed before, as it's really what it means.\n> >\n> > I understand your suggestion of adding a new field to indicate whether\n> > any of the new requests have been made or not. You just want this\n> > field to represent only a new request or does it also represent the\n> > current checkpoint to finish quickly.\n>\n> Only represent what it means: a new checkpoint is requested. An additional\n> CHECKPOINT_IMMEDIATE flag is orthogonal to this flag and this information.\n\nThanks for the confirmation.\n\n> > > CHECKPOINT_IMMEDIATE is the only new flag that can be used in an already in\n> > > progress checkpoint, so it can be simply added to the view flags.\n> >\n> > As discussed upthread this is not advisable to do so. The content of\n> > 'flags' remains the same through the checkpoint. 
We cannot add a new\n> > checkpoint's flag (CHECKPOINT_IMMEDIATE ) to the current one even\n> > though it affects current checkpoint behaviour. Only thing we can do\n> > is to add a new field to show that the current checkpoint is affected\n> > with new requests.\n>\n> I don't get it. The checkpoint flags and the view flags (set by\n> pgstat_progrss_update*) are different, so why can't we add this flag to the\n> view flags? The fact that checkpointer.c doesn't update the passed flag and\n> instead look in the shmem to see if CHECKPOINT_IMMEDIATE has been set since is\n> an implementation detail, and the view shouldn't focus on which flags were\n> initially passed to the checkpointer but instead which flags the checkpointer\n> is actually enforcing, as that's what the user should be interested in. If you\n> want to store it in another field internally but display it in the view with\n> the rest of the flags, I'm fine with it.\n\nJust to be in sync with the way code behaves, it is better not to\nupdate the next checkpoint request's CHECKPOINT_IMMEDIATE with the\ncurrent checkpoint 'flags' field. Because the current checkpoint\nstarts with a different set of flags and when there is a new request\n(with CHECKPOINT_IMMEDIATE), it just processes the pending operations\nquickly to take up next requests. If we update this information in the\n'flags' field of the view, it says that the current checkpoint is\nstarted with CHECKPOINT_IMMEDIATE which is not true. Hence I had\nthought of adding a new field ('next flags' or 'upcoming flags') which\ncontain all the flag values of new checkpoint requests. This field\nindicates whether the current checkpoint is throttled or not and also\nit indicates there are new requests. Please share your thoughts. 
More\nthoughts are welcomed.\n\nThanks & Regards,\nNitin Jadhav\n\n\n\nOn Fri, Mar 11, 2022 at 5:43 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Fri, Mar 11, 2022 at 04:59:11PM +0530, Nitin Jadhav wrote:\n> > > That \"throttled\" flag should be the same as having or not a \"force\" in the\n> > > flags. We should be consistent and report information the same way, so either\n> > > a lot of flags (is_throttled, is_force...) or as now a single field containing\n> > > the set flags, so the current approach seems better. Also, it wouldn't be much\n> > > better to show the checkpoint as not having the force flags and still not being\n> > > throttled.\n> >\n> > I think your understanding is wrong here. The flag which affects\n> > throttling behaviour is CHECKPOINT_IMMEDIATE.\n>\n> Yes sorry, that's what I meant and later used in the flags.\n>\n> > I am not suggesting\n> > removing the existing 'flags' field of pg_stat_progress_checkpoint\n> > view and adding a new field 'throttled'. The content of the 'flags'\n> > field remains the same. I was suggesting replacing the 'next_flags'\n> > field with 'throttled' field since the new request with\n> > CHECKPOINT_IMMEDIATE flag enabled will affect the current checkpoint.\n>\n> Are you saying that this new throttled flag will only be set by the overloaded\n> flags in ckpt_flags? So you can have a checkpoint with a CHECKPOINT_IMMEDIATE\n> flags that's throttled, and a checkpoint without the CHECKPOINT_IMMEDIATE flag\n> that's not throttled?\n>\n> > > CHECKPOINT_REQUESTED will always be set by RequestCheckpoint(), and can be used\n> > > to detect that someone wants a new checkpoint afterwards, whatever it's and\n> > > whether or not the current checkpoint to be finished quickly. 
For this flag I\n> > > think it's better to not report it in the view flags but with a new field, as\n> > > discussed before, as it's really what it means.\n> >\n> > I understand your suggestion of adding a new field to indicate whether\n> > any of the new requests have been made or not. You just want this\n> > field to represent only a new request or does it also represent the\n> > current checkpoint to finish quickly.\n>\n> Only represent what it means: a new checkpoint is requested. An additional\n> CHECKPOINT_IMMEDIATE flag is orthogonal to this flag and this information.\n>\n> > > CHECKPOINT_IMMEDIATE is the only new flag that can be used in an already in\n> > > progress checkpoint, so it can be simply added to the view flags.\n> >\n> > As discussed upthread this is not advisable to do so. The content of\n> > 'flags' remains the same through the checkpoint. We cannot add a new\n> > checkpoint's flag (CHECKPOINT_IMMEDIATE ) to the current one even\n> > though it affects current checkpoint behaviour. Only thing we can do\n> > is to add a new field to show that the current checkpoint is affected\n> > with new requests.\n>\n> I don't get it. The checkpoint flags and the view flags (set by\n> pgstat_progrss_update*) are different, so why can't we add this flag to the\n> view flags? The fact that checkpointer.c doesn't update the passed flag and\n> instead look in the shmem to see if CHECKPOINT_IMMEDIATE has been set since is\n> an implementation detail, and the view shouldn't focus on which flags were\n> initially passed to the checkpointer but instead which flags the checkpointer\n> is actually enforcing, as that's what the user should be interested in. 
If you\n> want to store it in another field internally but display it in the view with\n> the rest of the flags, I'm fine with it.\n>\n> > > Why not just reporting (ckpt_flags & (CHECKPOINT_REQUESTED |\n> > > CHECKPOINT_IMMEDIATE)) in the path(s) that can update the new flags for the\n> > > view?\n> >\n> > Where do you want to add this in the path?\n>\n> Same as in your current patch I guess.\n\n\n", "msg_date": "Mon, 14 Mar 2022 15:16:50 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "On Mon, Mar 14, 2022 at 03:16:50PM +0530, Nitin Jadhav wrote:\n> > > I am not suggesting\n> > > removing the existing 'flags' field of pg_stat_progress_checkpoint\n> > > view and adding a new field 'throttled'. The content of the 'flags'\n> > > field remains the same. I was suggesting replacing the 'next_flags'\n> > > field with 'throttled' field since the new request with\n> > > CHECKPOINT_IMMEDIATE flag enabled will affect the current checkpoint.\n> >\n> > Are you saying that this new throttled flag will only be set by the overloaded\n> > flags in ckpt_flags?\n>\n> Yes. you are right.\n>\n> > So you can have a checkpoint with a CHECKPOINT_IMMEDIATE\n> > flags that's throttled, and a checkpoint without the CHECKPOINT_IMMEDIATE flag\n> > that's not throttled?\n>\n> I think it's the reverse. A checkpoint with a CHECKPOINT_IMMEDIATE\n> flags that's not throttled (disables delays between writes) and a\n> checkpoint without the CHECKPOINT_IMMEDIATE flag that's throttled\n> (enables delays between writes)\n\nYes that's how it's supposed to work, but my point was that your suggested\n'throttled' flag could say the opposite, which is bad.\n\n> > I don't get it. 
The checkpoint flags and the view flags (set by\n> > pgstat_progrss_update*) are different, so why can't we add this flag to the\n> > view flags? The fact that checkpointer.c doesn't update the passed flag and\n> > instead look in the shmem to see if CHECKPOINT_IMMEDIATE has been set since is\n> > an implementation detail, and the view shouldn't focus on which flags were\n> > initially passed to the checkpointer but instead which flags the checkpointer\n> > is actually enforcing, as that's what the user should be interested in. If you\n> > want to store it in another field internally but display it in the view with\n> > the rest of the flags, I'm fine with it.\n>\n> Just to be in sync with the way code behaves, it is better not to\n> update the next checkpoint request's CHECKPOINT_IMMEDIATE with the\n> current checkpoint 'flags' field. Because the current checkpoint\n> starts with a different set of flags and when there is a new request\n> (with CHECKPOINT_IMMEDIATE), it just processes the pending operations\n> quickly to take up next requests. If we update this information in the\n> 'flags' field of the view, it says that the current checkpoint is\n> started with CHECKPOINT_IMMEDIATE which is not true.\n\nWhich is why I suggested to only take into account CHECKPOINT_REQUESTED (to\nbe able to display that a new checkpoint was requested) and\nCHECKPOINT_IMMEDIATE, to be able to display that the current checkpoint isn't\nthrottled anymore if it were.\n\nI still don't understand why you want so much to display \"how the checkpoint\nwas initially started\" rather than \"how the checkpoint is really behaving right\nnow\". The whole point of having a progress view is to have something dynamic\nthat reflects the current activity.\n\n> Hence I had\n> thought of adding a new field ('next flags' or 'upcoming flags') which\n> contain all the flag values of new checkpoint requests. 
This field\n> indicates whether the current checkpoint is throttled or not and also\n> it indicates there are new requests.\n\nI'm not opposed to having such a field, I'm opposed to having a view with \"the\ncurrent checkpoint is throttled but if there are some flags in the next\ncheckpoint flags and those flags contain checkpoint immediate then the current\ncheckpoint isn't actually throttled anymore\" behavior.\n\n\n", "msg_date": "Mon, 14 Mar 2022 19:45:55 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "> > > I don't get it. The checkpoint flags and the view flags (set by\n> > > pgstat_progrss_update*) are different, so why can't we add this flag to the\n> > > view flags? The fact that checkpointer.c doesn't update the passed flag and\n> > > instead look in the shmem to see if CHECKPOINT_IMMEDIATE has been set since is\n> > > an implementation detail, and the view shouldn't focus on which flags were\n> > > initially passed to the checkpointer but instead which flags the checkpointer\n> > > is actually enforcing, as that's what the user should be interested in. If you\n> > > want to store it in another field internally but display it in the view with\n> > > the rest of the flags, I'm fine with it.\n> >\n> > Just to be in sync with the way code behaves, it is better not to\n> > update the next checkpoint request's CHECKPOINT_IMMEDIATE with the\n> > current checkpoint 'flags' field. Because the current checkpoint\n> > starts with a different set of flags and when there is a new request\n> > (with CHECKPOINT_IMMEDIATE), it just processes the pending operations\n> > quickly to take up next requests. 
If we update this information in the\n> > 'flags' field of the view, it says that the current checkpoint is\n> > started with CHECKPOINT_IMMEDIATE which is not true.\n>\n> Which is why I suggested to only take into account CHECKPOINT_REQUESTED (to\n> be able to display that a new checkpoint was requested)\n\nI will take care of it in the next patch.\n\n> > Hence I had\n> > thought of adding a new field ('next flags' or 'upcoming flags') which\n> > contain all the flag values of new checkpoint requests. This field\n> > indicates whether the current checkpoint is throttled or not and also\n> > it indicates there are new requests.\n>\n> I'm not opposed to having such a field, I'm opposed to having a view with \"the\n> current checkpoint is throttled but if there are some flags in the next\n> checkpoint flags and those flags contain checkpoint immediate then the current\n> checkpoint isn't actually throttled anymore\" behavior.\n\nI understand your point and I also agree that it becomes difficult for\nthe user to understand the context.\n\n> and\n> CHECKPOINT_IMMEDIATE, to be able to display that the current checkpoint isn't\n> throttled anymore if it were.\n>\n> I still don't understand why you want so much to display \"how the checkpoint\n> was initially started\" rather than \"how the checkpoint is really behaving right\n> now\". The whole point of having a progress view is to have something dynamic\n> that reflects the current activity.\n\nAs of now I will not consider adding this information to the view. If\nrequired, and nobody opposes having this included in the 'flags' field\nof the view, then I will consider adding it.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Mon, Mar 14, 2022 at 5:16 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Mon, Mar 14, 2022 at 03:16:50PM +0530, Nitin Jadhav wrote:\n> > > > I am not suggesting\n> > > > removing the existing 'flags' field of pg_stat_progress_checkpoint\n> > > > view and adding a new field 'throttled'. 
The content of the 'flags'\n> > > > field remains the same. I was suggesting replacing the 'next_flags'\n> > > > field with 'throttled' field since the new request with\n> > > > CHECKPOINT_IMMEDIATE flag enabled will affect the current checkpoint.\n> > >\n> > > Are you saying that this new throttled flag will only be set by the overloaded\n> > > flags in ckpt_flags?\n> >\n> > Yes. you are right.\n> >\n> > > So you can have a checkpoint with a CHECKPOINT_IMMEDIATE\n> > > flags that's throttled, and a checkpoint without the CHECKPOINT_IMMEDIATE flag\n> > > that's not throttled?\n> >\n> > I think it's the reverse. A checkpoint with a CHECKPOINT_IMMEDIATE\n> > flags that's not throttled (disables delays between writes) and a\n> > checkpoint without the CHECKPOINT_IMMEDIATE flag that's throttled\n> > (enables delays between writes)\n>\n> Yes that's how it's supposed to work, but my point was that your suggested\n> 'throttled' flag could say the opposite, which is bad.\n>\n> > > I don't get it. The checkpoint flags and the view flags (set by\n> > > pgstat_progrss_update*) are different, so why can't we add this flag to the\n> > > view flags? The fact that checkpointer.c doesn't update the passed flag and\n> > > instead look in the shmem to see if CHECKPOINT_IMMEDIATE has been set since is\n> > > an implementation detail, and the view shouldn't focus on which flags were\n> > > initially passed to the checkpointer but instead which flags the checkpointer\n> > > is actually enforcing, as that's what the user should be interested in. If you\n> > > want to store it in another field internally but display it in the view with\n> > > the rest of the flags, I'm fine with it.\n> >\n> > Just to be in sync with the way code behaves, it is better not to\n> > update the next checkpoint request's CHECKPOINT_IMMEDIATE with the\n> > current checkpoint 'flags' field. 
Because the current checkpoint\n> > starts with a different set of flags and when there is a new request\n> > (with CHECKPOINT_IMMEDIATE), it just processes the pending operations\n> > quickly to take up next requests. If we update this information in the\n> > 'flags' field of the view, it says that the current checkpoint is\n> > started with CHECKPOINT_IMMEDIATE which is not true.\n>\n> Which is why I suggested to only take into account CHECKPOINT_REQUESTED (to\n> be able to display that a new checkpoint was requested) and\n> CHECKPOINT_IMMEDIATE, to be able to display that the current checkpoint isn't\n> throttled anymore if it were.\n>\n> I still don't understand why you want so much to display \"how the checkpoint\n> was initially started\" rather than \"how the checkpoint is really behaving right\n> now\". The whole point of having a progress view is to have something dynamic\n> that reflects the current activity.\n>\n> > Hence I had\n> > thought of adding a new field ('next flags' or 'upcoming flags') which\n> > contain all the flag values of new checkpoint requests. 
This field\n> > indicates whether the current checkpoint is throttled or not and also\n> > it indicates there are new requests.\n>\n> I'm not opposed to having such a field, I'm opposed to having a view with \"the\n> current checkpoint is throttled but if there are some flags in the next\n> checkpoint flags and those flags contain checkpoint immediate then the current\n> checkpoint isn't actually throttled anymore\" behavior.\n\n\n", "msg_date": "Fri, 18 Mar 2022 16:52:52 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "Hi,\n\nThis is a long thread, sorry for asking if this has been asked before.\n\nOn 2022-03-08 20:25:28 +0530, Nitin Jadhav wrote:\n> \t * Sort buffers that need to be written to reduce the likelihood of random\n> @@ -2129,6 +2132,8 @@ BufferSync(int flags)\n> \t\tbufHdr = GetBufferDescriptor(buf_id);\n> \n> \t\tnum_processed++;\n> +\t\tpgstat_progress_update_param(PROGRESS_CHECKPOINT_BUFFERS_PROCESSED,\n> +\t\t\t\t\t\t\t\t\t num_processed);\n> \n> \t\t/*\n> \t\t * We don't need to acquire the lock here, because we're only looking\n> @@ -2149,6 +2154,8 @@ BufferSync(int flags)\n> \t\t\t\tTRACE_POSTGRESQL_BUFFER_SYNC_WRITTEN(buf_id);\n> \t\t\t\tPendingCheckpointerStats.m_buf_written_checkpoints++;\n> \t\t\t\tnum_written++;\n> +\t\t\t\tpgstat_progress_update_param(PROGRESS_CHECKPOINT_BUFFERS_WRITTEN,\n> +\t\t\t\t\t\t\t\t\t\t\t num_written);\n> \t\t\t}\n> \t\t}\n\nHave you measured the performance effects of this? On fast storage with large\nshared_buffers I've seen these loops in profiles. 
It's probably fine, but it'd\nbe good to verify that.\n\n\n> @@ -1897,6 +1897,112 @@ pg_stat_progress_basebackup| SELECT s.pid,\n> s.param4 AS tablespaces_total,\n> s.param5 AS tablespaces_streamed\n> FROM pg_stat_get_progress_info('BASEBACKUP'::text) s(pid, datid, relid, param1, param2, param3, param4, param5, param6, param7, param8, param9, param10, param11, param12, param13, param14, param15, param16, param17, param18, param19, param20);\n> +pg_stat_progress_checkpoint| SELECT s.pid,\n> + CASE s.param1\n> + WHEN 1 THEN 'checkpoint'::text\n> + WHEN 2 THEN 'restartpoint'::text\n> + ELSE NULL::text\n> + END AS type,\n> + (((((((\n> + CASE\n> + WHEN ((s.param2 & (1)::bigint) > 0) THEN 'shutdown '::text\n> + ELSE ''::text\n> + END ||\n> + CASE\n> + WHEN ((s.param2 & (2)::bigint) > 0) THEN 'end-of-recovery '::text\n> + ELSE ''::text\n> + END) ||\n> + CASE\n> + WHEN ((s.param2 & (4)::bigint) > 0) THEN 'immediate '::text\n> + ELSE ''::text\n> + END) ||\n> + CASE\n> + WHEN ((s.param2 & (8)::bigint) > 0) THEN 'force '::text\n> + ELSE ''::text\n> + END) ||\n> + CASE\n> + WHEN ((s.param2 & (16)::bigint) > 0) THEN 'flush-all '::text\n> + ELSE ''::text\n> + END) ||\n> + CASE\n> + WHEN ((s.param2 & (32)::bigint) > 0) THEN 'wait '::text\n> + ELSE ''::text\n> + END) ||\n> + CASE\n> + WHEN ((s.param2 & (128)::bigint) > 0) THEN 'wal '::text\n> + ELSE ''::text\n> + END) ||\n> + CASE\n> + WHEN ((s.param2 & (256)::bigint) > 0) THEN 'time '::text\n> + ELSE ''::text\n> + END) AS flags,\n> + (((((((\n> + CASE\n> + WHEN ((s.param3 & (1)::bigint) > 0) THEN 'shutdown '::text\n> + ELSE ''::text\n> + END ||\n> + CASE\n> + WHEN ((s.param3 & (2)::bigint) > 0) THEN 'end-of-recovery '::text\n> + ELSE ''::text\n> + END) ||\n> + CASE\n> + WHEN ((s.param3 & (4)::bigint) > 0) THEN 'immediate '::text\n> + ELSE ''::text\n> + END) ||\n> + CASE\n> + WHEN ((s.param3 & (8)::bigint) > 0) THEN 'force '::text\n> + ELSE ''::text\n> + END) ||\n> + CASE\n> + WHEN ((s.param3 & (16)::bigint) > 0) THEN 'flush-all 
'::text\n> + ELSE ''::text\n> + END) ||\n> + CASE\n> + WHEN ((s.param3 & (32)::bigint) > 0) THEN 'wait '::text\n> + ELSE ''::text\n> + END) ||\n> + CASE\n> + WHEN ((s.param3 & (128)::bigint) > 0) THEN 'wal '::text\n> + ELSE ''::text\n> + END) ||\n> + CASE\n> + WHEN ((s.param3 & (256)::bigint) > 0) THEN 'time '::text\n> + ELSE ''::text\n> + END) AS next_flags,\n> + ('0/0'::pg_lsn + (\n> + CASE\n> + WHEN (s.param4 < 0) THEN pow((2)::numeric, (64)::numeric)\n> + ELSE (0)::numeric\n> + END + (s.param4)::numeric)) AS start_lsn,\n> + to_timestamp(((946684800)::double precision + ((s.param5)::double precision / (1000000)::double precision))) AS start_time,\n> + CASE s.param6\n> + WHEN 1 THEN 'initializing'::text\n> + WHEN 2 THEN 'getting virtual transaction IDs'::text\n> + WHEN 3 THEN 'checkpointing replication slots'::text\n> + WHEN 4 THEN 'checkpointing logical replication snapshot files'::text\n> + WHEN 5 THEN 'checkpointing logical rewrite mapping files'::text\n> + WHEN 6 THEN 'checkpointing replication origin'::text\n> + WHEN 7 THEN 'checkpointing commit log pages'::text\n> + WHEN 8 THEN 'checkpointing commit time stamp pages'::text\n> + WHEN 9 THEN 'checkpointing subtransaction pages'::text\n> + WHEN 10 THEN 'checkpointing multixact pages'::text\n> + WHEN 11 THEN 'checkpointing predicate lock pages'::text\n> + WHEN 12 THEN 'checkpointing buffers'::text\n> + WHEN 13 THEN 'processing file sync requests'::text\n> + WHEN 14 THEN 'performing two phase checkpoint'::text\n> + WHEN 15 THEN 'performing post checkpoint cleanup'::text\n> + WHEN 16 THEN 'invalidating replication slots'::text\n> + WHEN 17 THEN 'recycling old WAL files'::text\n> + WHEN 18 THEN 'truncating subtransactions'::text\n> + WHEN 19 THEN 'finalizing'::text\n> + ELSE NULL::text\n> + END AS phase,\n> + s.param7 AS buffers_total,\n> + s.param8 AS buffers_processed,\n> + s.param9 AS buffers_written,\n> + s.param10 AS files_total,\n> + s.param11 AS files_synced\n> + FROM 
pg_stat_get_progress_info('CHECKPOINT'::text) s(pid, datid, relid, param1, param2, param3, param4, param5, param6, param7, param8, param9, param10, param11, param12, param13, param14, param15, param16, param17, param18, param19, param20);\n> pg_stat_progress_cluster| SELECT s.pid,\n> s.datid,\n> d.datname,\n\nThis view is depressingly complicated. Added up the view definitions for\nthe already existing pg_stat_progress* views add up to a measurable part of\nthe size of an empty database:\n\npostgres[1160866][1]=# SELECT sum(octet_length(ev_action)), SUM(pg_column_size(ev_action)) FROM pg_rewrite WHERE ev_class::regclass::text LIKE '%progress%';\n┌───────┬───────┐\n│ sum │ sum │\n├───────┼───────┤\n│ 97410 │ 19786 │\n└───────┴───────┘\n(1 row)\n\nand this view looks to be a good bit more complicated than the existing\npg_stat_progress* views.\n\nIndeed:\ntemplate1[1165473][1]=# SELECT ev_class::regclass, length(ev_action), pg_column_size(ev_action) FROM pg_rewrite WHERE ev_class::regclass::text LIKE '%progress%' ORDER BY length(ev_action) DESC;\n┌───────────────────────────────┬────────┬────────────────┐\n│ ev_class │ length │ pg_column_size │\n├───────────────────────────────┼────────┼────────────────┤\n│ pg_stat_progress_checkpoint │ 43290 │ 5409 │\n│ pg_stat_progress_create_index │ 23293 │ 4177 │\n│ pg_stat_progress_cluster │ 18390 │ 3704 │\n│ pg_stat_progress_analyze │ 16121 │ 3339 │\n│ pg_stat_progress_vacuum │ 16076 │ 3392 │\n│ pg_stat_progress_copy │ 15124 │ 3080 │\n│ pg_stat_progress_basebackup │ 8406 │ 2094 │\n└───────────────────────────────┴────────┴────────────────┘\n(7 rows)\n\npg_rewrite without pg_stat_progress_checkpoint: 745472, with: 753664\n\n\npg_rewrite is the second biggest relation in an empty database already...\n\ntemplate1[1164827][1]=# SELECT relname, pg_total_relation_size(oid) FROM pg_class WHERE relkind = 'r' ORDER BY 2 DESC LIMIT 5;\n┌────────────────┬────────────────────────┐\n│ relname │ pg_total_relation_size 
│\n├────────────────┼────────────────────────┤\n│ pg_proc │ 1212416 │\n│ pg_rewrite │ 745472 │\n│ pg_attribute │ 704512 │\n│ pg_description │ 630784 │\n│ pg_collation │ 409600 │\n└────────────────┴────────────────────────┘\n(5 rows)\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 18 Mar 2022 17:15:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "On Fri, Mar 18, 2022 at 05:15:56PM -0700, Andres Freund wrote:\n> Have you measured the performance effects of this? On fast storage with large\n> shared_buffers I've seen these loops in profiles. It's probably fine, but it'd\n> be good to verify that.\n\nI am wondering if we could make the function inlined at some point.\nWe could also play it safe and only update the counters every N loops\ninstead.\n\n> This view is depressingly complicated. Added up the view definitions for\n> the already existing pg_stat_progress* views add up to a measurable part of\n> the size of an empty database:\n\nYeah. I think that what's proposed could be simplified, and we had\nbetter remove the fields that are not that useful. First, do we have \nany need for next_flags? Second, is the start LSN really necessary\nfor monitoring purposes? Not all the information in the first\nparameter is useful, as well. For example \"shutdown\" will never be \nseen as it is not possible to use a session at this stage, no? There\nis also no gain in having \"immediate\", \"flush-all\", \"force\" and \"wait\"\n(for this one if the checkpoint is requested the session doing the\nwork knows this information already).\n\nA last thing is that we may gain in visibility by having more\nattributes as an effect of splitting param2. On thing that would make\nsense is to track the reason why the checkpoint was triggered\nseparately (aka wal and time). 
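As a concrete illustration of what the chained CASE/concatenation in the quoted view computes, here is a minimal Python sketch of the same bitmask-to-names decoding. The bit values are the ones used in the view definition quoted above; the function and constant names are made up for this example and are not the server's symbols:

```python
# Sketch only: decode a checkpoint "flags" bitmask into flag names, the
# way the proposed view's chain of CASE ... || CASE ... expressions does.
# Bit values mirror the quoted view definition; names are illustrative.
FLAG_BITS = [
    (1, "shutdown"),
    (2, "end-of-recovery"),
    (4, "immediate"),
    (8, "force"),
    (16, "flush-all"),
    (32, "wait"),
    (128, "wal"),
    (256, "time"),
]

def decode_checkpoint_flags(mask: int) -> list[str]:
    """Return the names of all flag bits set in mask, in bit order."""
    return [name for bit, name in FLAG_BITS if mask & bit]
```

Returning the decoded flags as an array-like value rather than a single space-separated string would spare monitoring tools the ad-hoc string parsing.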
Should we use a text[] instead to list\nall the parameters instead? Using a space-separated list of items is\nnot intuitive IMO, and callers of this routine will likely parse\nthat.\n\nShouldn't we also track the number of files flushed in each sub-step?\nIn some deployments you could have a large number of 2PC files and\nsuch. We may want more information on such matters.\n\n+ WHEN 3 THEN 'checkpointing replication slots'\n+ WHEN 4 THEN 'checkpointing logical replication snapshot files'\n+ WHEN 5 THEN 'checkpointing logical rewrite mapping files'\n+ WHEN 6 THEN 'checkpointing replication origin'\n+ WHEN 7 THEN 'checkpointing commit log pages'\n+ WHEN 8 THEN 'checkpointing commit time stamp pages'\nThere is a lot of \"checkpointing\" here. All those terms could be\nshorter without losing their meaning.\n\nThis patch still needs some work, so I am marking it as RwF for now.\n--\nMichael", "msg_date": "Tue, 5 Apr 2022 18:45:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "On Sat, 19 Mar 2022 at 01:15, Andres Freund <andres@anarazel.de> wrote:\n> pg_rewrite without pg_stat_progress_checkpoint: 745472, with: 753664\n>\n> pg_rewrite is the second biggest relation in an empty database already...\n\nYeah, that's not great. Thanks for nerd-sniping me into looking into\nhow views and pg_rewrite rules work, that was very interesting and I\nlearned quite a lot.\n\n# Immediately potential, limited to progress views\n\nI noticed that the CASE-WHEN (used in translating progress stage index\nto stage names) in those progress reporting views can be more\nefficiently described (althoug with slightly worse behaviour around\nundefined values) using text array lookups (as attached). 
That\nresulted in somewhat smaller rewrite entries for the progress views\n(toast compression was good old pglz):\n\ntemplate1=# SELECT sum(octet_length(ev_action)),\nSUM(pg_column_size(ev_action)) FROM pg_rewrite WHERE\nev_class::regclass::text LIKE '%progress%';\n\nmaster:\n sum | sum\n-------+-------\n 97277 | 19956\npatched:\n sum | sum\n-------+-------\n 77069 | 18417\n\nSo this seems like a nice improvement of 20% uncompressed / 7% compressed.\n\nI tested various cases of phase number to text translations: `CASE ..\nWHEN`; `(ARRAY[]::text[])[index]` and `('{}'::text[])[index]`. See\nresults below:\n\npostgres=# create or replace view arrayliteral_view as select\n(ARRAY['a','b','c','d','e','f']::text[])[index] as name from tst\ns(index);\nCREATE INDEX\npostgres=# create or replace view stringcast_view as select\n('{a,b,c,d,e,f}'::text[])[index] as name from tst s(index);\nCREATE INDEX\npostgres=# create or replace view split_stringcast_view as select\n(('{a,b,' || 'c,d,e,f}')::text[])[index] as name from tst s(index);\nCREATE VIEW\npostgres=# create or replace view case_view as select case index when\n0 then 'a' when 1 then 'b' when 2 then 'c' when 3 then 'd' when 4 then\n'e' when 5 then 'f' end as name from tst s(index);\nCREATE INDEX\n\n\npostgres=# postgres=# select ev_class::regclass::text,\noctet_length(ev_action), pg_column_size(ev_action) from pg_rewrite\nwhere ev_class in ('arrayliteral_view'::regclass::oid,\n'case_view'::regclass::oid, 'split_stringcast_view'::regclass::oid,\n'stringcast_view'::regclass::oid);\n ev_class | octet_length | pg_column_size\n-----------------------+--------------+----------------\n arrayliteral_view | 3311 | 1322\n stringcast_view | 2610 | 1257\n case_view | 5170 | 1412\n split_stringcast_view | 2847 | 1350\n\nIt seems to me that we could consider replacing the CASE statements\nwith array literals and lookups if we really value our template\ndatabase size. 
But, as text literal concatenations don't seem to get\nconstant folded before storing them in the rules table, this rewrite\nof the views would result in long lines in the system_views.sql file,\nor we'd have to deal with the additional overhead of the append\noperator and cast nodes.\n\n# Future work; nodeToString / readNode, all rewrite rules\n\nAdditionally, we might want to consider other changes like default (or\nempty value) elision in nodeToString, if that is considered a\nreasonable option and if we really want to reduce the size of the\npg_rewrite table.\n\nI think a lot of space can be recovered from that: A manual removal of\nwhat seemed to be fields with default values (and the removal of all\nquery location related fields) in the current definition of\npg_stat_progress_create_index reduces its uncompressed size from\n23226B raw and 4204B compressed to 13821B raw and 2784B compressed,\nfor an on-disk space saving of 33% for this view's ev_action.\n\nDo note, however, that that would add significant branching in the\nnodeToString and readNode code, which might slow down that code\nsignificantly. I'm not planning on working on that; but in my opinion\nthat is a viable path to reducing the size of new database catalogs.\n\n\n-Matthias\n\nPS. attached patch is not to be considered complete - it is a minimal\nexample of the array literal form. 
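Returning to the default-value elision idea described above, the round-trip could look roughly like the following hedged sketch; the field names and default values are invented for illustration, not taken from any real node type:

```python
# Hedged sketch of default-value elision for a node serializer: keep
# only fields whose values differ from the node type's defaults, and
# reapply the defaults when reading back. Names/defaults are invented.
_MISSING = object()  # sentinel so a field absent from defaults is kept

EXAMPLE_DEFAULTS = {"location": -1, "inh": False, "relpersistence": "p"}

def elide_defaults(node: dict, defaults: dict) -> dict:
    """Drop every field whose value equals that field's default."""
    return {k: v for k, v in node.items() if defaults.get(k, _MISSING) != v}

def restore_defaults(partial: dict, defaults: dict) -> dict:
    """Inverse of elide_defaults: start from defaults, apply overrides."""
    merged = dict(defaults)
    merged.update(partial)
    return merged
```

The hand-edited 33% saving quoted above came from removing exactly this kind of redundancy; the cost is the extra branching on both the write and read paths.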
It fails regression tests because I\ndidn't bother updating or including the regression tests on system\nviews.", "msg_date": "Fri, 8 Apr 2022 16:52:07 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Size of pg_rewrite (Was: Report checkpoint progress with\n pg_stat_progress_checkpoint)" }, { "msg_contents": "Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n\n> But, as text literal concatenations don't seem to get constant folded\n> before storing them in the rules table, this rewrite of the views\n> would result in long lines in the system_views.sql file, or we'd have\n> to deal with the additional overhead of the append operator and cast\n> nodes.\n\nThere is no need to use the concatenation operator to split array\nconstants across multiple lines. Newlines are fine either inside the\nstring (between array elements), or between two string string literals\n(which become one string constant at parse time).\n\nhttps://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-STRINGS\n\nilmari@[local]:5432 ~=# select\nilmari@[local]:5432 ~-# '{foo,\nilmari@[local]:5432 ~'# bar}'::text[],\nilmari@[local]:5432 ~-# '{bar,'\nilmari@[local]:5432 ~-# 'baz}'::text[];\n┌───────────┬───────────┐\n│ text │ text │\n├───────────┼───────────┤\n│ {foo,bar} │ {bar,baz} │\n└───────────┴───────────┘\n(1 row)\n\n\n- ilmari\n\n\n", "msg_date": "Fri, 08 Apr 2022 16:20:36 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Size of pg_rewrite" }, { "msg_contents": "On Fri, 8 Apr 2022 at 17:20, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n>\n> Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n>\n> > But, as text literal concatenations don't seem to get constant folded\n> > before storing them in the rules table, this rewrite of the views\n> > would result in long lines in the system_views.sql file, or we'd 
have\n> > to deal with the additional overhead of the append operator and cast\n> > nodes.\n>\n> There is no need to use the concatenation operator to split array\n> constants across multiple lines. Newlines are fine either inside the\n> string (between array elements), or between two string string literals\n> (which become one string constant at parse time).\n\nAh, neat, that saves some long lines in the system_views file. I had\nalready tried the \"auto-concatenate two consecutive string literals\",\nbut that try failed in initdb, so now I'm not sure what happened\nthere.\n\nThanks!\n\n-Matthias\n\n\n", "msg_date": "Fri, 8 Apr 2022 17:30:02 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Size of pg_rewrite" }, { "msg_contents": "Hi, \n\nOn April 8, 2022 7:52:07 AM PDT, Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:\n>On Sat, 19 Mar 2022 at 01:15, Andres Freund <andres@anarazel.de> wrote:\n>> pg_rewrite without pg_stat_progress_checkpoint: 745472, with: 753664\n>>\n>> pg_rewrite is the second biggest relation in an empty database already...\n>\n>Yeah, that's not great. Thanks for nerd-sniping me into looking into\n>how views and pg_rewrite rules work, that was very interesting and I\n>learned quite a lot.\n\nThanks for looking!\n\n\n># Immediately potential, limited to progress views\n>\n>I noticed that the CASE-WHEN (used in translating progress stage index\n>to stage names) in those progress reporting views can be more\n>efficiently described (althoug with slightly worse behaviour around\n>undefined values) using text array lookups (as attached). 
That\n>resulted in somewhat smaller rewrite entries for the progress views\n>(toast compression was good old pglz):\n>\n>template1=# SELECT sum(octet_length(ev_action)),\n>SUM(pg_column_size(ev_action)) FROM pg_rewrite WHERE\n>ev_class::regclass::text LIKE '%progress%';\n>\n>master:\n> sum | sum\n>-------+-------\n> 97277 | 19956\n>patched:\n> sum | sum\n>-------+-------\n> 77069 | 18417\n>\n>So this seems like a nice improvement of 20% uncompressed / 7% compressed.\n>\n>I tested various cases of phase number to text translations: `CASE ..\n>WHEN`; `(ARRAY[]::text[])[index]` and `('{}'::text[])[index]`. See\n>results below:\n>\n>postgres=# create or replace view arrayliteral_view as select\n>(ARRAY['a','b','c','d','e','f']::text[])[index] as name from tst\n>s(index);\n>CREATE INDEX\n>postgres=# create or replace view stringcast_view as select\n>('{a,b,c,d,e,f}'::text[])[index] as name from tst s(index);\n>CREATE INDEX\n>postgres=# create or replace view split_stringcast_view as select\n>(('{a,b,' || 'c,d,e,f}')::text[])[index] as name from tst s(index);\n>CREATE VIEW\n>postgres=# create or replace view case_view as select case index when\n>0 then 'a' when 1 then 'b' when 2 then 'c' when 3 then 'd' when 4 then\n>'e' when 5 then 'f' end as name from tst s(index);\n>CREATE INDEX\n>\n>\n>postgres=# postgres=# select ev_class::regclass::text,\n>octet_length(ev_action), pg_column_size(ev_action) from pg_rewrite\n>where ev_class in ('arrayliteral_view'::regclass::oid,\n>'case_view'::regclass::oid, 'split_stringcast_view'::regclass::oid,\n>'stringcast_view'::regclass::oid);\n> ev_class | octet_length | pg_column_size\n>-----------------------+--------------+----------------\n> arrayliteral_view | 3311 | 1322\n> stringcast_view | 2610 | 1257\n> case_view | 5170 | 1412\n> split_stringcast_view | 2847 | 1350\n>\n>It seems to me that we could consider replacing the CASE statements\n>with array literals and lookups if we really value our template\n>database size. 
But, as text literal concatenations don't seem to get\n>constant folded before storing them in the rules table, this rewrite\n>of the views would result in long lines in the system_views.sql file,\n>or we'd have to deal with the additional overhead of the append\n>operator and cast nodes.\n\nMy inclination is that the mapping functions should be c functions. There's really no point in doing it in SQL and it comes at a noticable price. And, if done in C, we can fix mistakes in minor releases, which we can't in SQL.\n\n\n># Future work; nodeToString / readNode, all rewrite rules\n>\n>Additionally, we might want to consider other changes like default (or\n>empty value) elision in nodeToString, if that is considered a\n>reasonable option and if we really want to reduce the size of the\n>pg_rewrite table.\n>\n>I think a lot of space can be recovered from that: A manual removal of\n>what seemed to be fields with default values (and the removal of all\n>query location related fields) in the current definition of\n>pg_stat_progress_create_index reduces its uncompressed size from\n>23226B raw and 4204B compressed to 13821B raw and 2784B compressed,\n>for an on-disk space saving of 33% for this view's ev_action.\n>\n>Do note, however, that that would add significant branching in the\n>nodeToString and readNode code, which might slow down that code\n>significantly. I'm not planning on working on that; but in my opinion\n>that is a viable path to reducing the size of new database catalogs.\n\nWe should definitely be careful about that. I do agree that there's a lot of efficiency to be gained in the serialization format. Once we have the automatic node func generation in place, we could have one representation for human consumption, and one for density...\n\nAndres\n-- \nSent from my Android device with K-9 Mail. 
Please excuse my brevity.\n\n\n", "msg_date": "Fri, 08 Apr 2022 09:09:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "=?US-ASCII?Q?Re=3A_Size_of_pg=5Frewrite_=28Was=3A_Report_checkpoi?=\n =?US-ASCII?Q?nt_progress_with_pg=5Fstat=5Fprogress=5Fcheckpoint=29?=" }, { "msg_contents": "Hi,\n\nHere is the update patch which fixes the previous comments discussed\nin this thread. I am sorry for the long gap in the discussion. Kindly\nlet me know if I have missed any of the comments or anything new.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Fri, Mar 18, 2022 at 4:52 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > > > I don't get it. The checkpoint flags and the view flags (set by\n> > > > pgstat_progrss_update*) are different, so why can't we add this flag to the\n> > > > view flags? The fact that checkpointer.c doesn't update the passed flag and\n> > > > instead look in the shmem to see if CHECKPOINT_IMMEDIATE has been set since is\n> > > > an implementation detail, and the view shouldn't focus on which flags were\n> > > > initially passed to the checkpointer but instead which flags the checkpointer\n> > > > is actually enforcing, as that's what the user should be interested in. If you\n> > > > want to store it in another field internally but display it in the view with\n> > > > the rest of the flags, I'm fine with it.\n> > >\n> > > Just to be in sync with the way code behaves, it is better not to\n> > > update the next checkpoint request's CHECKPOINT_IMMEDIATE with the\n> > > current checkpoint 'flags' field. Because the current checkpoint\n> > > starts with a different set of flags and when there is a new request\n> > > (with CHECKPOINT_IMMEDIATE), it just processes the pending operations\n> > > quickly to take up next requests. 
If we update this information in the\n> > > 'flags' field of the view, it says that the current checkpoint is\n> > > started with CHECKPOINT_IMMEDIATE which is not true.\n> >\n> > Which is why I suggested to only take into account CHECKPOINT_REQUESTED (to\n> > be able to display that a new checkpoint was requested)\n>\n> I will take care in the next patch.\n>\n> > > Hence I had\n> > > thought of adding a new field ('next flags' or 'upcoming flags') which\n> > > contain all the flag values of new checkpoint requests. This field\n> > > indicates whether the current checkpoint is throttled or not and also\n> > > it indicates there are new requests.\n> >\n> > I'm not opposed to having such a field, I'm opposed to having a view with \"the\n> > current checkpoint is throttled but if there are some flags in the next\n> > checkpoint flags and those flags contain checkpoint immediate then the current\n> > checkpoint isn't actually throttled anymore\" behavior.\n>\n> I understand your point and I also agree that it becomes difficult for\n> the user to understand the context.\n>\n> > and\n> > CHECKPOINT_IMMEDIATE, to be able to display that the current checkpoint isn't\n> > throttled anymore if it were.\n> >\n> > I still don't understand why you want so much to display \"how the checkpoint\n> > was initially started\" rather than \"how the checkpoint is really behaving right\n> > now\". The whole point of having a progress view is to have something dynamic\n> > that reflects the current activity.\n>\n> As of now I will not consider adding this information to the view. 
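For reference, the mechanism being debated can be summarised in a few lines: the checkpoint keeps the flags it started with, but a later request that sets CHECKPOINT_IMMEDIATE in shared memory stops the delays between writes. A hedged Python sketch follows; only the bit value 0x0004 matches the view definition quoted earlier, and the function is illustrative rather than the server's actual code:

```python
# Illustrative sketch of the behaviour under discussion: throttling
# (delays between buffer writes) stops as soon as CHECKPOINT_IMMEDIATE
# appears either in the flags the checkpoint started with or in the
# flags of a request that arrived after it started.
CHECKPOINT_IMMEDIATE = 0x0004

def is_throttled(start_flags: int, pending_request_flags: int) -> bool:
    """True while the in-progress checkpoint still sleeps between writes."""
    return not ((start_flags | pending_request_flags) & CHECKPOINT_IMMEDIATE)
```

This is why showing only the starting flags can mislead: in this model, `is_throttled(0, CHECKPOINT_IMMEDIATE)` is already false even though the checkpoint was not started as immediate.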
If\n> required and nobody opposes having this included in the 'flags' field\n> of the view, then I will consider adding.\n>\n> Thanks & Regards,\n> Nitin Jadhav\n>\n> On Mon, Mar 14, 2022 at 5:16 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Mon, Mar 14, 2022 at 03:16:50PM +0530, Nitin Jadhav wrote:\n> > > > > I am not suggesting\n> > > > > removing the existing 'flags' field of pg_stat_progress_checkpoint\n> > > > > view and adding a new field 'throttled'. The content of the 'flags'\n> > > > > field remains the same. I was suggesting replacing the 'next_flags'\n> > > > > field with 'throttled' field since the new request with\n> > > > > CHECKPOINT_IMMEDIATE flag enabled will affect the current checkpoint.\n> > > >\n> > > > Are you saying that this new throttled flag will only be set by the overloaded\n> > > > flags in ckpt_flags?\n> > >\n> > > Yes. you are right.\n> > >\n> > > > So you can have a checkpoint with a CHECKPOINT_IMMEDIATE\n> > > > flags that's throttled, and a checkpoint without the CHECKPOINT_IMMEDIATE flag\n> > > > that's not throttled?\n> > >\n> > > I think it's the reverse. A checkpoint with a CHECKPOINT_IMMEDIATE\n> > > flags that's not throttled (disables delays between writes) and a\n> > > checkpoint without the CHECKPOINT_IMMEDIATE flag that's throttled\n> > > (enables delays between writes)\n> >\n> > Yes that's how it's supposed to work, but my point was that your suggested\n> > 'throttled' flag could say the opposite, which is bad.\n> >\n> > > > I don't get it. The checkpoint flags and the view flags (set by\n> > > > pgstat_progrss_update*) are different, so why can't we add this flag to the\n> > > > view flags? 
The fact that checkpointer.c doesn't update the passed flag and\n> > > > instead look in the shmem to see if CHECKPOINT_IMMEDIATE has been set since is\n> > > > an implementation detail, and the view shouldn't focus on which flags were\n> > > > initially passed to the checkpointer but instead which flags the checkpointer\n> > > > is actually enforcing, as that's what the user should be interested in. If you\n> > > > want to store it in another field internally but display it in the view with\n> > > > the rest of the flags, I'm fine with it.\n> > >\n> > > Just to be in sync with the way code behaves, it is better not to\n> > > update the next checkpoint request's CHECKPOINT_IMMEDIATE with the\n> > > current checkpoint 'flags' field. Because the current checkpoint\n> > > starts with a different set of flags and when there is a new request\n> > > (with CHECKPOINT_IMMEDIATE), it just processes the pending operations\n> > > quickly to take up next requests. If we update this information in the\n> > > 'flags' field of the view, it says that the current checkpoint is\n> > > started with CHECKPOINT_IMMEDIATE which is not true.\n> >\n> > Which is why I suggested to only take into account CHECKPOINT_REQUESTED (to\n> > be able to display that a new checkpoint was requested) and\n> > CHECKPOINT_IMMEDIATE, to be able to display that the current checkpoint isn't\n> > throttled anymore if it were.\n> >\n> > I still don't understand why you want so much to display \"how the checkpoint\n> > was initially started\" rather than \"how the checkpoint is really behaving right\n> > now\". The whole point of having a progress view is to have something dynamic\n> > that reflects the current activity.\n> >\n> > > Hence I had\n> > > thought of adding a new field ('next flags' or 'upcoming flags') which\n> > > contain all the flag values of new checkpoint requests. 
This field\n> > > indicates whether the current checkpoint is throttled or not and also\n> > > it indicates there are new requests.\n> >\n> > I'm not opposed to having such a field, I'm opposed to having a view with \"the\n> > current checkpoint is throttled but if there are some flags in the next\n> > checkpoint flags and those flags contain checkpoint immediate then the current\n> > checkpoint isn't actually throttled anymore\" behavior.", "msg_date": "Mon, 6 Jun 2022 11:33:50 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "> Have you measured the performance effects of this? On fast storage with large\n> shared_buffers I've seen these loops in profiles. It's probably fine, but it'd\n> be good to verify that.\n\nTo understand the performance effects of the above, I have taken the\naverage of five checkpoints with the patch and without the patch in my\nenvironment. Here are the results.\nWith patch: 269.65 s\nWithout patch: 269.60 s\n\nIt looks fine. Please share your views.\n\n> This view is depressingly complicated. Added up the view definitions for\n> the already existing pg_stat_progress* views add up to a measurable part of\n> the size of an empty database:\n\nThank you so much for sharing the detailed analysis. 
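On the earlier point about per-buffer counter updates: although the measurement above shows no regression, the "update the counters every N loops" fallback suggested upthread is cheap to implement. A hedged sketch, with made-up names standing in for pgstat_progress_update_param and the buffer-sync loop:

```python
# Sketch of batched progress reporting: instead of publishing the
# counter on every iteration, publish it every batch_size iterations
# and once more at the end so the final value is exact. update_param
# stands in for pgstat_progress_update_param; all names illustrative.
def sync_buffers(buffers, update_param, batch_size=1024):
    processed = 0
    for _ in buffers:
        processed += 1
        if processed % batch_size == 0:  # amortise the reporting cost
            update_param("buffers_processed", processed)
    update_param("buffers_processed", processed)  # final, exact value
    return processed
```

The trade-off is slightly coarser progress granularity in exchange for fewer shared-memory writes inside the hot loop.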
We can remove a\nfew fields which are not so important to make it simple.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Sat, Mar 19, 2022 at 5:45 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> This is a long thread, sorry for asking if this has been asked before.\n>\n> On 2022-03-08 20:25:28 +0530, Nitin Jadhav wrote:\n> > * Sort buffers that need to be written to reduce the likelihood of random\n> > @@ -2129,6 +2132,8 @@ BufferSync(int flags)\n> > bufHdr = GetBufferDescriptor(buf_id);\n> >\n> > num_processed++;\n> > + pgstat_progress_update_param(PROGRESS_CHECKPOINT_BUFFERS_PROCESSED,\n> > + num_processed);\n> >\n> > /*\n> > * We don't need to acquire the lock here, because we're only looking\n> > @@ -2149,6 +2154,8 @@ BufferSync(int flags)\n> > TRACE_POSTGRESQL_BUFFER_SYNC_WRITTEN(buf_id);\n> > PendingCheckpointerStats.m_buf_written_checkpoints++;\n> > num_written++;\n> > + pgstat_progress_update_param(PROGRESS_CHECKPOINT_BUFFERS_WRITTEN,\n> > + num_written);\n> > }\n> > }\n>\n> Have you measured the performance effects of this? On fast storage with large\n> shared_buffers I've seen these loops in profiles. 
It's probably fine, but it'd\n> be good to verify that.\n>\n>\n> > @@ -1897,6 +1897,112 @@ pg_stat_progress_basebackup| SELECT s.pid,\n> > s.param4 AS tablespaces_total,\n> > s.param5 AS tablespaces_streamed\n> > FROM pg_stat_get_progress_info('BASEBACKUP'::text) s(pid, datid, relid, param1, param2, param3, param4, param5, param6, param7, param8, param9, param10, param11, param12, param13, param14, param15, param16, param17, param18, param19, param20);\n> > +pg_stat_progress_checkpoint| SELECT s.pid,\n> > + CASE s.param1\n> > + WHEN 1 THEN 'checkpoint'::text\n> > + WHEN 2 THEN 'restartpoint'::text\n> > + ELSE NULL::text\n> > + END AS type,\n> > + (((((((\n> > + CASE\n> > + WHEN ((s.param2 & (1)::bigint) > 0) THEN 'shutdown '::text\n> > + ELSE ''::text\n> > + END ||\n> > + CASE\n> > + WHEN ((s.param2 & (2)::bigint) > 0) THEN 'end-of-recovery '::text\n> > + ELSE ''::text\n> > + END) ||\n> > + CASE\n> > + WHEN ((s.param2 & (4)::bigint) > 0) THEN 'immediate '::text\n> > + ELSE ''::text\n> > + END) ||\n> > + CASE\n> > + WHEN ((s.param2 & (8)::bigint) > 0) THEN 'force '::text\n> > + ELSE ''::text\n> > + END) ||\n> > + CASE\n> > + WHEN ((s.param2 & (16)::bigint) > 0) THEN 'flush-all '::text\n> > + ELSE ''::text\n> > + END) ||\n> > + CASE\n> > + WHEN ((s.param2 & (32)::bigint) > 0) THEN 'wait '::text\n> > + ELSE ''::text\n> > + END) ||\n> > + CASE\n> > + WHEN ((s.param2 & (128)::bigint) > 0) THEN 'wal '::text\n> > + ELSE ''::text\n> > + END) ||\n> > + CASE\n> > + WHEN ((s.param2 & (256)::bigint) > 0) THEN 'time '::text\n> > + ELSE ''::text\n> > + END) AS flags,\n> > + (((((((\n> > + CASE\n> > + WHEN ((s.param3 & (1)::bigint) > 0) THEN 'shutdown '::text\n> > + ELSE ''::text\n> > + END ||\n> > + CASE\n> > + WHEN ((s.param3 & (2)::bigint) > 0) THEN 'end-of-recovery '::text\n> > + ELSE ''::text\n> > + END) ||\n> > + CASE\n> > + WHEN ((s.param3 & (4)::bigint) > 0) THEN 'immediate '::text\n> > + ELSE ''::text\n> > + END) ||\n> > + CASE\n> > + WHEN ((s.param3 & (8)::bigint) > 0) 
THEN 'force '::text\n> > + ELSE ''::text\n> > + END) ||\n> > + CASE\n> > + WHEN ((s.param3 & (16)::bigint) > 0) THEN 'flush-all '::text\n> > + ELSE ''::text\n> > + END) ||\n> > + CASE\n> > + WHEN ((s.param3 & (32)::bigint) > 0) THEN 'wait '::text\n> > + ELSE ''::text\n> > + END) ||\n> > + CASE\n> > + WHEN ((s.param3 & (128)::bigint) > 0) THEN 'wal '::text\n> > + ELSE ''::text\n> > + END) ||\n> > + CASE\n> > + WHEN ((s.param3 & (256)::bigint) > 0) THEN 'time '::text\n> > + ELSE ''::text\n> > + END) AS next_flags,\n> > + ('0/0'::pg_lsn + (\n> > + CASE\n> > + WHEN (s.param4 < 0) THEN pow((2)::numeric, (64)::numeric)\n> > + ELSE (0)::numeric\n> > + END + (s.param4)::numeric)) AS start_lsn,\n> > + to_timestamp(((946684800)::double precision + ((s.param5)::double precision / (1000000)::double precision))) AS start_time,\n> > + CASE s.param6\n> > + WHEN 1 THEN 'initializing'::text\n> > + WHEN 2 THEN 'getting virtual transaction IDs'::text\n> > + WHEN 3 THEN 'checkpointing replication slots'::text\n> > + WHEN 4 THEN 'checkpointing logical replication snapshot files'::text\n> > + WHEN 5 THEN 'checkpointing logical rewrite mapping files'::text\n> > + WHEN 6 THEN 'checkpointing replication origin'::text\n> > + WHEN 7 THEN 'checkpointing commit log pages'::text\n> > + WHEN 8 THEN 'checkpointing commit time stamp pages'::text\n> > + WHEN 9 THEN 'checkpointing subtransaction pages'::text\n> > + WHEN 10 THEN 'checkpointing multixact pages'::text\n> > + WHEN 11 THEN 'checkpointing predicate lock pages'::text\n> > + WHEN 12 THEN 'checkpointing buffers'::text\n> > + WHEN 13 THEN 'processing file sync requests'::text\n> > + WHEN 14 THEN 'performing two phase checkpoint'::text\n> > + WHEN 15 THEN 'performing post checkpoint cleanup'::text\n> > + WHEN 16 THEN 'invalidating replication slots'::text\n> > + WHEN 17 THEN 'recycling old WAL files'::text\n> > + WHEN 18 THEN 'truncating subtransactions'::text\n> > + WHEN 19 THEN 'finalizing'::text\n> > + ELSE NULL::text\n> > + END AS 
phase,\n> > + s.param7 AS buffers_total,\n> > + s.param8 AS buffers_processed,\n> > + s.param9 AS buffers_written,\n> > + s.param10 AS files_total,\n> > + s.param11 AS files_synced\n> > + FROM pg_stat_get_progress_info('CHECKPOINT'::text) s(pid, datid, relid, param1, param2, param3, param4, param5, param6, param7, param8, param9, param10, param11, param12, param13, param14, param15, param16, param17, param18, param19, param20);\n> > pg_stat_progress_cluster| SELECT s.pid,\n> > s.datid,\n> > d.datname,\n>\n> This view is depressingly complicated. Added up the view definitions for\n> the already existing pg_stat_progress* views add up to a measurable part of\n> the size of an empty database:\n>\n> postgres[1160866][1]=# SELECT sum(octet_length(ev_action)), SUM(pg_column_size(ev_action)) FROM pg_rewrite WHERE ev_class::regclass::text LIKE '%progress%';\n> ┌───────┬───────┐\n> │ sum │ sum │\n> ├───────┼───────┤\n> │ 97410 │ 19786 │\n> └───────┴───────┘\n> (1 row)\n>\n> and this view looks to be a good bit more complicated than the existing\n> pg_stat_progress* views.\n>\n> Indeed:\n> template1[1165473][1]=# SELECT ev_class::regclass, length(ev_action), pg_column_size(ev_action) FROM pg_rewrite WHERE ev_class::regclass::text LIKE '%progress%' ORDER BY length(ev_action) DESC;\n> ┌───────────────────────────────┬────────┬────────────────┐\n> │ ev_class │ length │ pg_column_size │\n> ├───────────────────────────────┼────────┼────────────────┤\n> │ pg_stat_progress_checkpoint │ 43290 │ 5409 │\n> │ pg_stat_progress_create_index │ 23293 │ 4177 │\n> │ pg_stat_progress_cluster │ 18390 │ 3704 │\n> │ pg_stat_progress_analyze │ 16121 │ 3339 │\n> │ pg_stat_progress_vacuum │ 16076 │ 3392 │\n> │ pg_stat_progress_copy │ 15124 │ 3080 │\n> │ pg_stat_progress_basebackup │ 8406 │ 2094 │\n> └───────────────────────────────┴────────┴────────────────┘\n> (7 rows)\n>\n> pg_rewrite without pg_stat_progress_checkpoint: 745472, with: 753664\n>\n>\n> pg_rewrite is the second biggest relation in 
an empty database already...\n>\n> template1[1164827][1]=# SELECT relname, pg_total_relation_size(oid) FROM pg_class WHERE relkind = 'r' ORDER BY 2 DESC LIMIT 5;\n> ┌────────────────┬────────────────────────┐\n> │ relname │ pg_total_relation_size │\n> ├────────────────┼────────────────────────┤\n> │ pg_proc │ 1212416 │\n> │ pg_rewrite │ 745472 │\n> │ pg_attribute │ 704512 │\n> │ pg_description │ 630784 │\n> │ pg_collation │ 409600 │\n> └────────────────┴────────────────────────┘\n> (5 rows)\n>\n> Greetings,\n>\n> Andres Freund\n\n\n", "msg_date": "Mon, 13 Jun 2022 19:08:35 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "> > Have you measured the performance effects of this? On fast storage with large\n> > shared_buffers I've seen these loops in profiles. It's probably fine, but it'd\n> > be good to verify that.\n>\n> I am wondering if we could make the function inlined at some point.\n> We could also play it safe and only update the counters every N loops\n> instead.\n\nThe idea looks good but based on the performance numbers shared above,\nit is not affecting the performance. So we can use the current\napproach as it gives more accurate progress.\n---\n\n> > This view is depressingly complicated. Added up the view definitions for\n> > the already existing pg_stat_progress* views add up to a measurable part of\n> > the size of an empty database:\n>\n> Yeah. I think that what's proposed could be simplified, and we had\n> better remove the fields that are not that useful. First, do we have\n> any need for next_flags?\n\n\"next_flags\" is removed in the v6 patch. Added a \"new_requests\" field\nto get to know whether the current checkpoint is being throttled or\nnot. 
Please share your views on this.\n---\n\n> Second, is the start LSN really necessary\n> for monitoring purposes?\n\nIMO, start LSN is necessary to debug if the checkpoint is taking longer.\n---\n\n> Not all the information in the first\n> parameter is useful, as well. For example \"shutdown\" will never be\n> seen as it is not possible to use a session at this stage, no?\n\nI understand that \"shutdown\" and \"end-of-recovery\" will never be seen\nand I have removed it in the v6 patch.\n---\n\n> There\n> is also no gain in having \"immediate\", \"flush-all\", \"force\" and \"wait\"\n> (for this one if the checkpoint is requested the session doing the\n> work knows this information already).\n\n\"immediate\" is required to understand whether the current checkpoint\nis throttled or not. I am not sure about other flags \"flush-all\",\n\"force\" and \"wait\". I have just supported all the flags to match the\n'checkpoint start' log message. Please share your views. If it is not\nreally required, I will remove it in the next patch.\n---\n\n> A last thing is that we may gain in visibility by having more\n> attributes as an effect of splitting param2. On thing that would make\n> sense is to track the reason why the checkpoint was triggered\n> separately (aka wal and time). Should we use a text[] instead to list\n> all the parameters instead? Using a space-separated list of items is\n> not intuitive IMO, and callers of this routine will likely parse\n> that.\n\nIf I understand the above comment correctly, you are saying to\nintroduce a new field, say \"reason\" ( possible values are either wal\nor time) and the \"flags\" field will continue to represent the other\nflags like \"immediate\", etc. The idea looks good here. 
We can\nintroduce new field \"reason\" and \"flags\" field can be renamed to\n\"throttled\" (true/false) if we decide to not support other flags\n\"flush-all\", \"force\" and \"wait\".\n---\n\n> + WHEN 3 THEN 'checkpointing replication slots'\n> + WHEN 4 THEN 'checkpointing logical replication snapshot files'\n> + WHEN 5 THEN 'checkpointing logical rewrite mapping files'\n> + WHEN 6 THEN 'checkpointing replication origin'\n> + WHEN 7 THEN 'checkpointing commit log pages'\n> + WHEN 8 THEN 'checkpointing commit time stamp pages'\n> There is a lot of \"checkpointing\" here. All those terms could be\n> shorter without losing their meaning.\n\nI will try to make it short in the next patch.\n---\n\nPlease share your thoughts.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Tue, Apr 5, 2022 at 3:15 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Mar 18, 2022 at 05:15:56PM -0700, Andres Freund wrote:\n> > Have you measured the performance effects of this? On fast storage with large\n> > shared_buffers I've seen these loops in profiles. It's probably fine, but it'd\n> > be good to verify that.\n>\n> I am wondering if we could make the function inlined at some point.\n> We could also play it safe and only update the counters every N loops\n> instead.\n>\n> > This view is depressingly complicated. Added up the view definitions for\n> > the already existing pg_stat_progress* views add up to a measurable part of\n> > the size of an empty database:\n>\n> Yeah. I think that what's proposed could be simplified, and we had\n> better remove the fields that are not that useful. First, do we have\n> any need for next_flags? Second, is the start LSN really necessary\n> for monitoring purposes? Not all the information in the first\n> parameter is useful, as well. For example \"shutdown\" will never be\n> seen as it is not possible to use a session at this stage, no? 
There\n> is also no gain in having \"immediate\", \"flush-all\", \"force\" and \"wait\"\n> (for this one if the checkpoint is requested the session doing the\n> work knows this information already).\n>\n> A last thing is that we may gain in visibility by having more\n> attributes as an effect of splitting param2. On thing that would make\n> sense is to track the reason why the checkpoint was triggered\n> separately (aka wal and time). Should we use a text[] instead to list\n> all the parameters instead? Using a space-separated list of items is\n> not intuitive IMO, and callers of this routine will likely parse\n> that.\n>\n> Shouldn't we also track the number of files flushed in each sub-step?\n> In some deployments you could have a large number of 2PC files and\n> such. We may want more information on such matters.\n>\n> + WHEN 3 THEN 'checkpointing replication slots'\n> + WHEN 4 THEN 'checkpointing logical replication snapshot files'\n> + WHEN 5 THEN 'checkpointing logical rewrite mapping files'\n> + WHEN 6 THEN 'checkpointing replication origin'\n> + WHEN 7 THEN 'checkpointing commit log pages'\n> + WHEN 8 THEN 'checkpointing commit time stamp pages'\n> There is a lot of \"checkpointing\" here. All those terms could be\n> shorter without losing their meaning.\n>\n> This patch still needs some work, so I am marking it as RwF for now.\n> --\n> Michael\n\n\n", "msg_date": "Mon, 13 Jun 2022 19:26:39 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "Hi,\n\nOn 2022-06-13 19:08:35 +0530, Nitin Jadhav wrote:\n> > Have you measured the performance effects of this? On fast storage with large\n> > shared_buffers I've seen these loops in profiles. 
It's probably fine, but it'd\n> > be good to verify that.\n> \n> To understand the performance effects of the above, I have taken the\n> average of five checkpoints with the patch and without the patch in my\n> environment. Here are the results.\n> With patch: 269.65 s\n> Without patch: 269.60 s\n\nThose look like timed checkpoints - if the checkpoints are sleeping a\npart of the time, you're not going to see any potential overhead.\n\nTo see whether this has an effect you'd have to make sure there's a\ncertain number of dirty buffers (e.g. by doing CREATE TABLE AS\nsome_query) and then do a manual checkpoint and time how long that\ntimes.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 6 Jul 2022 17:04:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "> > To understand the performance effects of the above, I have taken the\n> > average of five checkpoints with the patch and without the patch in my\n> > environment. Here are the results.\n> > With patch: 269.65 s\n> > Without patch: 269.60 s\n>\n> Those look like timed checkpoints - if the checkpoints are sleeping a\n> part of the time, you're not going to see any potential overhead.\n\nYes. The above data is collected from timed checkpoints.\n\ncreate table t1(a int);\ninsert into t1 select * from generate_series(1,10000000);\n\nI generated a lot of data by using the above queries which would in\nturn trigger the checkpoint (wal).\n---\n\n> To see whether this has an effect you'd have to make sure there's a\n> certain number of dirty buffers (e.g. 
by doing CREATE TABLE AS\n> some_query) and then do a manual checkpoint and time how long that\n> times.\n\nFor this case I have generated data by using below queries.\n\ncreate table t1(a int);\ninsert into t1 select * from generate_series(1,8000000);\n\nThis does not trigger the checkpoint automatically. I have issued the\nCHECKPOINT manually and measured the performance by considering an\naverage of 5 checkpoints. Here are the details.\n\nWith patch: 2.457 s\nWithout patch: 2.334 s\n\nPlease share your thoughts.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Thu, Jul 7, 2022 at 5:34 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-06-13 19:08:35 +0530, Nitin Jadhav wrote:\n> > > Have you measured the performance effects of this? On fast storage with large\n> > > shared_buffers I've seen these loops in profiles. It's probably fine, but it'd\n> > > be good to verify that.\n> >\n> > To understand the performance effects of the above, I have taken the\n> > average of five checkpoints with the patch and without the patch in my\n> > environment. Here are the results.\n> > With patch: 269.65 s\n> > Without patch: 269.60 s\n>\n> Those look like timed checkpoints - if the checkpoints are sleeping a\n> part of the time, you're not going to see any potential overhead.\n>\n> To see whether this has an effect you'd have to make sure there's a\n> certain number of dirty buffers (e.g. 
by doing CREATE TABLE AS\n> some_query) and then do a manual checkpoint and time how long that\n> times.\n>\n> Greetings,\n>\n> Andres Freund\n\n\n", "msg_date": "Thu, 28 Jul 2022 15:08:38 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "Hi,\n\nOn 7/28/22 11:38 AM, Nitin Jadhav wrote:\n>>> To understand the performance effects of the above, I have taken the\n>>> average of five checkpoints with the patch and without the patch in my\n>>> environment. Here are the results.\n>>> With patch: 269.65 s\n>>> Without patch: 269.60 s\n>>\n>> Those look like timed checkpoints - if the checkpoints are sleeping a\n>> part of the time, you're not going to see any potential overhead.\n> \n> Yes. The above data is collected from timed checkpoints.\n> \n> create table t1(a int);\n> insert into t1 select * from generate_series(1,10000000);\n> \n> I generated a lot of data by using the above queries which would in\n> turn trigger the checkpoint (wal).\n> ---\n> \n>> To see whether this has an effect you'd have to make sure there's a\n>> certain number of dirty buffers (e.g. by doing CREATE TABLE AS\n>> some_query) and then do a manual checkpoint and time how long that\n>> times.\n> \n> For this case I have generated data by using below queries.\n> \n> create table t1(a int);\n> insert into t1 select * from generate_series(1,8000000);\n> \n> This does not trigger the checkpoint automatically. I have issued the\n> CHECKPOINT manually and measured the performance by considering an\n> average of 5 checkpoints. 
Here are the details.\n> \n> With patch: 2.457 s\n> Without patch: 2.334 s\n> \n> Please share your thoughts.\n> \n\nv6 was not applying anymore, due to a change in \ndoc/src/sgml/ref/checkpoint.sgml done by b9eb0ff09e (Rename \npg_checkpointer predefined role to pg_checkpoint).\n\nPlease find attached a rebase in v7.\n\nWhile working on this rebase, I also noticed that \"pg_checkpointer\" is \nstill mentioned in some translation files:\n\"\n$ git grep pg_checkpointer\nsrc/backend/po/de.po:msgid \"must be superuser or have privileges of \npg_checkpointer to do CHECKPOINT\"\nsrc/backend/po/ja.po:msgid \"must be superuser or have privileges of \npg_checkpointer to do CHECKPOINT\"\nsrc/backend/po/ja.po:msgstr \n\"CHECKPOINTを実行するにはスーパーユーザーであるか、またはpg_checkpointerの権限を持つ必要があります\"\nsrc/backend/po/sv.po:msgid \"must be superuser or have privileges of \npg_checkpointer to do CHECKPOINT\"\n\"\n\nI'm not familiar with how the translation files are handled (looks like \nthey have their own set of commits, see 3c0bcdbc66 for example) but \nwanted to mention that \"pg_checkpointer\" is still mentioned (even if \nthat may be expected as the last commit related to translation files \n(aka 3c0bcdbc66) is older than the one that renamed pg_checkpointer to \npg_checkpoint (aka b9eb0ff09e)).\n\nThat said, back to this patch: I did not look closely but noticed that \nthe buffers_total reported by pg_stat_progress_checkpoint:\n\npostgres=# select type,flags,start_lsn,phase,buffers_total,new_requests \nfrom pg_stat_progress_checkpoint;\n type | flags | start_lsn | phase \n | buffers_total | new_requests\n------------+-----------------------+------------+-----------------------+---------------+--------------\n checkpoint | immediate force wait | 1/E6C523A8 | checkpointing \nbuffers | 1024275 | false\n(1 row)\n\nis a little bit different from what is logged once completed:\n\n2022-11-04 08:18:50.806 UTC [3488442] LOG: checkpoint complete: wrote \n1024278 buffers (97.7%);\n\nRegards,\n\n-- 
\nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 4 Nov 2022 09:25:52 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "> v6 was not applying anymore, due to a change in\n> doc/src/sgml/ref/checkpoint.sgml done by b9eb0ff09e (Rename\n> pg_checkpointer predefined role to pg_checkpoint).\n>\n> Please find attached a rebase in v7.\n>\n> While working on this rebase, I also noticed that \"pg_checkpointer\" is\n> still mentioned in some translation files:\n\nThanks for rebasing the patch and sharing the information.\n---\n\n> That said, back to this patch: I did not look closely but noticed that\n> the buffers_total reported by pg_stat_progress_checkpoint:\n>\n> postgres=# select type,flags,start_lsn,phase,buffers_total,new_requests\n> from pg_stat_progress_checkpoint;\n> type | flags | start_lsn | phase\n> | buffers_total | new_requests\n> ------------+-----------------------+------------+-----------------------+---------------+--------------\n> checkpoint | immediate force wait | 1/E6C523A8 | checkpointing\n> buffers | 1024275 | false\n> (1 row)\n>\n> is a little bit different from what is logged once completed:\n>\n> 2022-11-04 08:18:50.806 UTC [3488442] LOG: checkpoint complete: wrote\n> 1024278 buffers (97.7%);\n\nThis is because the count shown in the checkpoint complete message\nincludes the additional increment done during SlruInternalWritePage().\nWe are not sure of this increment until it really happens. Hence it\nwas not considered in the patch. To make it compatible with the\ncheckpoint complete message, we should increment all three here,\nbuffers_total, buffers_processed and buffers_written. So the total\nnumber of buffers calculated earlier may not always be the same. 
If\nthis looks good, I will update this in the next patch.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Fri, Nov 4, 2022 at 1:57 PM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> Hi,\n>\n> On 7/28/22 11:38 AM, Nitin Jadhav wrote:\n> >>> To understand the performance effects of the above, I have taken the\n> >>> average of five checkpoints with the patch and without the patch in my\n> >>> environment. Here are the results.\n> >>> With patch: 269.65 s\n> >>> Without patch: 269.60 s\n> >>\n> >> Those look like timed checkpoints - if the checkpoints are sleeping a\n> >> part of the time, you're not going to see any potential overhead.\n> >\n> > Yes. The above data is collected from timed checkpoints.\n> >\n> > create table t1(a int);\n> > insert into t1 select * from generate_series(1,10000000);\n> >\n> > I generated a lot of data by using the above queries which would in\n> > turn trigger the checkpoint (wal).\n> > ---\n> >\n> >> To see whether this has an effect you'd have to make sure there's a\n> >> certain number of dirty buffers (e.g. by doing CREATE TABLE AS\n> >> some_query) and then do a manual checkpoint and time how long that\n> >> times.\n> >\n> > For this case I have generated data by using below queries.\n> >\n> > create table t1(a int);\n> > insert into t1 select * from generate_series(1,8000000);\n> >\n> > This does not trigger the checkpoint automatically. I have issued the\n> > CHECKPOINT manually and measured the performance by considering an\n> > average of 5 checkpoints. 
Here are the details.\n> >\n> > With patch: 2.457 s\n> > Without patch: 2.334 s\n> >\n> > Please share your thoughts.\n> >\n>\n> v6 was not applying anymore, due to a change in\n> doc/src/sgml/ref/checkpoint.sgml done by b9eb0ff09e (Rename\n> pg_checkpointer predefined role to pg_checkpoint).\n>\n> Please find attached a rebase in v7.\n>\n> While working on this rebase, I also noticed that \"pg_checkpointer\" is\n> still mentioned in some translation files:\n> \"\n> $ git grep pg_checkpointer\n> src/backend/po/de.po:msgid \"must be superuser or have privileges of\n> pg_checkpointer to do CHECKPOINT\"\n> src/backend/po/ja.po:msgid \"must be superuser or have privileges of\n> pg_checkpointer to do CHECKPOINT\"\n> src/backend/po/ja.po:msgstr\n> \"CHECKPOINTを実行するにはスーパーユーザーであるか、またはpg_checkpointerの権限を持つ必要があります\"\n> src/backend/po/sv.po:msgid \"must be superuser or have privileges of\n> pg_checkpointer to do CHECKPOINT\"\n> \"\n>\n> I'm not familiar with how the translation files are handled (looks like\n> they have their own set of commits, see 3c0bcdbc66 for example) but\n> wanted to mention that \"pg_checkpointer\" is still mentioned (even if\n> that may be expected as the last commit related to translation files\n> (aka 3c0bcdbc66) is older than the one that renamed pg_checkpointer to\n> pg_checkpoint (aka b9eb0ff09e)).\n>\n> That said, back to this patch: I did not look closely but noticed that\n> the buffers_total reported by pg_stat_progress_checkpoint:\n>\n> postgres=# select type,flags,start_lsn,phase,buffers_total,new_requests\n> from pg_stat_progress_checkpoint;\n> type | flags | start_lsn | phase\n> | buffers_total | new_requests\n> ------------+-----------------------+------------+-----------------------+---------------+--------------\n> checkpoint | immediate force wait | 1/E6C523A8 | checkpointing\n> buffers | 1024275 | false\n> (1 row)\n>\n> is a little bit different from what is logged once completed:\n>\n> 2022-11-04 08:18:50.806 UTC [3488442] LOG: 
checkpoint complete: wrote\n> 1024278 buffers (97.7%);\n>\n> Regards,\n>\n> --\n> Bertrand Drouvot\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 15 Nov 2022 17:11:52 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "On Fri, Nov 4, 2022 at 4:27 AM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n> Please find attached a rebase in v7.\n\nI don't think it's a good thing that this patch is using the\nprogress-reporting machinery. The point of that machinery is that we\nwant any backend to be able to report progress for any command it\nhappens to be running, and we don't know which command that will be at\nany given point in time, or how many backends will be running any\ngiven command at once. So we need some generic set of counters that\ncan be repurposed for whatever any particular backend happens to be\ndoing right at the moment.\n\nBut none of that applies to the checkpointer. Any information about\nthe checkpointer that we want to expose can just be advertised in a\ndedicated chunk of shared memory, perhaps even by simply adding it to\nCheckpointerShmemStruct. Then you can give the fields whatever names,\ntypes, and sizes you like, and you don't have to do all of this stuff\nwith mapping down to integers and back. The only real disadvantage\nthat I can see is then you have to think a bit harder about what the\nconcurrency model is here, and maybe you end up reimplementing\nsomething similar to what the progress-reporting stuff does for you,\nand *maybe* that is a sufficient reason to do it this way.\n\nBut I'm doubtful. 
This feels like a square-peg-round-hole situation.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Nov 2022 15:04:59 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "Hi,\n\nOn 2022-11-04 09:25:52 +0100, Drouvot, Bertrand wrote:\n> \n> @@ -7023,29 +7048,63 @@ static void\n> CheckPointGuts(XLogRecPtr checkPointRedo, int flags)\n> {\n> \tCheckPointRelationMap();\n> +\n> +\tpgstat_progress_update_param(PROGRESS_CHECKPOINT_PHASE,\n> +\t\t\t\t\t\t\t\t PROGRESS_CHECKPOINT_PHASE_REPLI_SLOTS);\n> \tCheckPointReplicationSlots();\n> +\n> +\tpgstat_progress_update_param(PROGRESS_CHECKPOINT_PHASE,\n> +\t\t\t\t\t\t\t\t PROGRESS_CHECKPOINT_PHASE_SNAPSHOTS);\n> \tCheckPointSnapBuild();\n> +\n> +\tpgstat_progress_update_param(PROGRESS_CHECKPOINT_PHASE,\n> +\t\t\t\t\t\t\t\t PROGRESS_CHECKPOINT_PHASE_LOGICAL_REWRITE_MAPPINGS);\n> \tCheckPointLogicalRewriteHeap();\n> +\n> +\tpgstat_progress_update_param(PROGRESS_CHECKPOINT_PHASE,\n> +\t\t\t\t\t\t\t\t PROGRESS_CHECKPOINT_PHASE_REPLI_ORIGIN);\n> \tCheckPointReplicationOrigin();\n> \n> \t/* Write out all dirty data in SLRUs and the main buffer pool */\n> \tTRACE_POSTGRESQL_BUFFER_CHECKPOINT_START(flags);\n> \tCheckpointStats.ckpt_write_t = GetCurrentTimestamp();\n> +\n> +\tpgstat_progress_update_param(PROGRESS_CHECKPOINT_PHASE,\n> +\t\t\t\t\t\t\t\t PROGRESS_CHECKPOINT_PHASE_CLOG_PAGES);\n> \tCheckPointCLOG();\n> +\n> +\tpgstat_progress_update_param(PROGRESS_CHECKPOINT_PHASE,\n> +\t\t\t\t\t\t\t\t PROGRESS_CHECKPOINT_PHASE_COMMITTS_PAGES);\n> \tCheckPointCommitTs();\n> +\n> +\tpgstat_progress_update_param(PROGRESS_CHECKPOINT_PHASE,\n> +\t\t\t\t\t\t\t\t PROGRESS_CHECKPOINT_PHASE_SUBTRANS_PAGES);\n> \tCheckPointSUBTRANS();\n> +\n> +\tpgstat_progress_update_param(PROGRESS_CHECKPOINT_PHASE,\n> +\t\t\t\t\t\t\t\t 
PROGRESS_CHECKPOINT_PHASE_MULTIXACT_PAGES);\n> \tCheckPointMultiXact();\n> +\n> +\tpgstat_progress_update_param(PROGRESS_CHECKPOINT_PHASE,\n> +\t\t\t\t\t\t\t\t PROGRESS_CHECKPOINT_PHASE_PREDICATE_LOCK_PAGES);\n> \tCheckPointPredicate();\n> +\n> +\tpgstat_progress_update_param(PROGRESS_CHECKPOINT_PHASE,\n> +\t\t\t\t\t\t\t\t PROGRESS_CHECKPOINT_PHASE_BUFFERS);\n> \tCheckPointBuffers(flags);\n> \n> \t/* Perform all queued up fsyncs */\n> \tTRACE_POSTGRESQL_BUFFER_CHECKPOINT_SYNC_START();\n> \tCheckpointStats.ckpt_sync_t = GetCurrentTimestamp();\n> +\tpgstat_progress_update_param(PROGRESS_CHECKPOINT_PHASE,\n> +\t\t\t\t\t\t\t\t PROGRESS_CHECKPOINT_PHASE_SYNC_FILES);\n> \tProcessSyncRequests();\n> \tCheckpointStats.ckpt_sync_end_t = GetCurrentTimestamp();\n> \tTRACE_POSTGRESQL_BUFFER_CHECKPOINT_DONE();\n> \n> \t/* We deliberately delay 2PC checkpointing as long as possible */\n> +\tpgstat_progress_update_param(PROGRESS_CHECKPOINT_PHASE,\n> +\t\t\t\t\t\t\t\t PROGRESS_CHECKPOINT_PHASE_TWO_PHASE);\n> \tCheckPointTwoPhase(checkPointRedo);\n> }\n\nThis is quite the code bloat. Can we make this less duplicative?\n\n\n> +CREATE VIEW pg_stat_progress_checkpoint AS\n> + SELECT\n> + S.pid AS pid,\n> + CASE S.param1 WHEN 1 THEN 'checkpoint'\n> + WHEN 2 THEN 'restartpoint'\n> + END AS type,\n> + ( CASE WHEN (S.param2 & 4) > 0 THEN 'immediate ' ELSE '' END ||\n> + CASE WHEN (S.param2 & 8) > 0 THEN 'force ' ELSE '' END ||\n> + CASE WHEN (S.param2 & 16) > 0 THEN 'flush-all ' ELSE '' END ||\n> + CASE WHEN (S.param2 & 32) > 0 THEN 'wait ' ELSE '' END ||\n> + CASE WHEN (S.param2 & 128) > 0 THEN 'wal ' ELSE '' END ||\n> + CASE WHEN (S.param2 & 256) > 0 THEN 'time ' ELSE '' END\n> + ) AS flags,\n> + ( '0/0'::pg_lsn +\n> + ((CASE\n> + WHEN S.param3 < 0 THEN pow(2::numeric, 64::numeric)::numeric\n> + ELSE 0::numeric\n> + END) +\n> + S.param3::numeric)\n> + ) AS start_lsn,\n\nI don't think we should embed this much complexity in the view\ndefintions. 
It's hard to read, bloats the catalog, we can't fix them once\nreleased. This stuff seems like it should be in a helper function.\n\nI don't have any idea what that pow stuff is supposed to be doing.\n\n\n> + to_timestamp(946684800 + (S.param4::float8 / 1000000)) AS start_time,\n\nI don't think this is a reasonable path - embedding way too many low-level\ndetails about the timestamp format in the view definition. Why do we need to\ndo this?\n\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Nov 2022 12:18:11 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "On Wed, Nov 16, 2022 at 1:35 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Nov 4, 2022 at 4:27 AM Drouvot, Bertrand\n> <bertranddrouvot.pg@gmail.com> wrote:\n> > Please find attached a rebase in v7.\n>\n> I don't think it's a good thing that this patch is using the\n> progress-reporting machinery. The point of that machinery is that we\n> want any backend to be able to report progress for any command it\n> happens to be running, and we don't know which command that will be at\n> any given point in time, or how many backends will be running any\n> given command at once. So we need some generic set of counters that\n> can be repurposed for whatever any particular backend happens to be\n> doing right at the moment.\n\nHm.\n\n> But none of that applies to the checkpointer. Any information about\n> the checkpointer that we want to expose can just be advertised in a\n> dedicated chunk of shared memory, perhaps even by simply adding it to\n> CheckpointerShmemStruct. Then you can give the fields whatever names,\n> types, and sizes you like, and you don't have to do all of this stuff\n> with mapping down to integers and back.
The only real disadvantage\n> that I can see is then you have to think a bit harder about what the\n> concurrency model is here, and maybe you end up reimplementing\n> something similar to what the progress-reporting stuff does for you,\n> and *maybe* that is a sufficient reason to do it this way.\n\n-1 for CheckpointerShmemStruct as it is being used for running\ncheckpoints and I don't think adding stats to it is a great idea.\nInstead, extending PgStat_CheckpointerStats and using shared memory\nstats for reporting progress/last checkpoint related stats is a good\nidea IMO. I also think that a new pg_stat_checkpoint view is needed\nbecause, right now, the PgStat_CheckpointerStats stats are exposed via\nthe pg_stat_bgwriter view, having a separate view for checkpoint stats\nis good here. Also, removing CheckpointStatsData and moving all of\nthose members to PgStat_CheckpointerStats, of course, by being careful\nabout the amount of shared memory required, is also a good idea IMO.\nGoing forward, PgStat_CheckpointerStats and pg_stat_checkpoint view\ncan be a single point of location for all the checkpoint related\nstats.\n\nThoughts?\n\nIn fact, I was recently having an off-list chat with Bertrand Drouvot\nabout the above idea.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 16 Nov 2022 16:01:55 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "Hi,\n\nOn 2022-11-16 16:01:55 +0530, Bharath Rupireddy wrote:\n> -1 for CheckpointerShmemStruct as it is being used for running\n> checkpoints and I don't think adding stats to it is a great idea.\n\nWhy? Imo the data needed for progress reporting aren't really \"stats\". 
We'd\nnot accumulate counters over time, just for the current checkpoint.\n\nI think it might even be useful for other parts of the system to know what the\ncheckpointer is doing, e.g. bgwriter or autovacuum could adapt the behaviour\nif checkpointer can't keep up. Somehow it'd feel wrong to use the stats system\nas the source of such adjustments - but perhaps my gut feeling on that isn't\nright.\n\nThe best argument for combining progress reporting with accumulating stats is\nthat we could likely share some of the code. Having accumulated stats for all\nthe checkpoint phases would e.g. be quite valuable.\n\n\n> Instead, extending PgStat_CheckpointerStats and using shared memory\n> stats for reporting progress/last checkpoint related stats is a good\n> idea IMO\n\nThere's certainly some potential for deduplicating state and to make stats\nupdated more frequently. But that doesn't necessarily mean that putting the\ncheckpoint progress into PgStat_CheckpointerStats is a good idea (nor the\nopposite).\n\n\n> I also think that a new pg_stat_checkpoint view is needed\n> because, right now, the PgStat_CheckpointerStats stats are exposed via\n> the pg_stat_bgwriter view, having a separate view for checkpoint stats\n> is good here.\n\nI agree that we should do that, but largely independent of the architectural\nquestion at hand.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 16 Nov 2022 10:14:33 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "On Wed, Nov 16, 2022 at 5:32 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> -1 for CheckpointerShmemStruct as it is being used for running\n> checkpoints and I don't think adding stats to it is a great idea.\n> Instead, extending PgStat_CheckpointerStats and using shared memory\n> stats for reporting progress/last 
checkpoint related stats is a good\n> idea IMO.\n\nI agree with Andres: progress reporting isn't really quite the same\nthing as stats, and either place seems like it could be reasonable. I\ndon't presently have an opinion on which is a better fit, but I don't\nthink the fact that CheckpointerShmemStruct is used for running\ncheckpoints rules anything out. Progress reporting is *also* about\nrunning checkpoints. Any historical data you want to expose might not\nbe about running checkpoints, but, uh, so what? I don't really see\nthat as a strong argument against it fitting into this struct.\n\n> I also think that a new pg_stat_checkpoint view is needed\n> because, right now, the PgStat_CheckpointerStats stats are exposed via\n> the pg_stat_bgwriter view, having a separate view for checkpoint stats\n> is good here.\n\nYep.\n\n> Also, removing CheckpointStatsData and moving all of\n> those members to PgStat_CheckpointerStats, of course, by being careful\n> about the amount of shared memory required, is also a good idea IMO.\n> Going forward, PgStat_CheckpointerStats and pg_stat_checkpoint view\n> can be a single point of location for all the checkpoint related\n> stats.\n\nI'm not sure that I completely follow this part, or that I agree with\nit. I have never really understood why we drive background writer or\ncheckpointer statistics through the statistics collector. Here again,\nfor things like table statistics, there is no choice, because we could\nhave an unbounded number of tables and need to keep statistics about\nall of them. The statistics collector can handle that by allocating\nmore memory as required. But there is only one background writer and\nonly one checkpointer, so that is not needed in those cases. 
Why not\njust have them expose anything they want to expose through shared\nmemory directly?\n\nIf the statistics collector provides services that we care about, like\npersisting data across restarts or making snapshots for transactional\nbehavior, then those might be reasons to go through it even for the\nbackground writer or checkpointer. But if so, we should be explicit\nabout what the reasons are, both in the mailing list discussion and in\ncode comments. Otherwise I fear that we'll just end up doing something\nin a more complicated way than is really necessary.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Nov 2022 14:19:32 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "Hi,\n\nOn 2022-11-16 14:19:32 -0500, Robert Haas wrote:\n> I have never really understood why we drive background writer or\n> checkpointer statistics through the statistics collector.\n\nTo some degree it is required for durability - the stats system needs to know\nhow to write out those stats. But that wasn't ever a good reason to send\nmessages to the stats collector - it could just read the stats from shared\nmemory after all.\n\nThere's also integration with snapshots of the stats, resetting them, etc.\n\nThere's also the complexity that some of the stats e.g. for checkpointer\naren't about work the checkpointer did, but just have ended up there for\nhistorical raisins. E.g. the number of fsyncs and writes done by backends.\n\nSee below:\n\n> Here again, for things like table statistics, there is no choice, because we\n> could have an unbounded number of tables and need to keep statistics about\n> all of them. The statistics collector can handle that by allocating more\n> memory as required. 
But there is only one background writer and only one\n> checkpointer, so that is not needed in those cases. Why not just have them\n> expose anything they want to expose through shared memory directly?\n\nThat's how it is in 15+. The memory for \"fixed-numbered\" or \"global\"\nstatistics are maintained by the stats system, but in plain shared memory,\nallocated at server start. Not via the hash table.\n\nRight now stats updates for the checkpointer use the \"changecount\" approach to\nupdates. For now that makes sense, because we update the stats only\noccasionally (after a checkpoint or when writing in CheckpointWriteDelay()) -\na stats viewer seeing the checkpoint count go up, without yet seeing the\ncorresponding buffers written would be misleading.\n\nI don't think we'd want every buffer write or whatnot go through the\nchangecount mechanism, on some non-x86 platforms that could be noticable. But\nif we didn't stage the stats updates locally I think we could make most of the\nstats changes without that overhead. For updates that just increment a single\ncounter there's simply no benefit in the changecount mechanism afaict.\n\nI didn't want to do that change during the initial shared memory stats work,\nit already was bigger than I could handle...\n\n\nIt's not quite clear to me what the best path forward is for\nbuf_written_backend / buf_fsync_backend, which currently are reported via the\ncheckpointer stats. I think the best path might be to stop counting them via\nthe CheckpointerShmem->num_backend_writes etc and just populate the fields in\nthe view (for backward compat) via the proposed [1] pg_stat_io patch. Doing\nthat accounting with CheckpointerCommLock held exclusively isn't free.\n\n\n\n> If the statistics collector provides services that we care about, like\n> persisting data across restarts or making snapshots for transactional\n> behavior, then those might be reasons to go through it even for the\n> background writer or checkpointer. 
But if so, we should be explicit\n> about what the reasons are, both in the mailing list discussion and in\n> code comments. Otherwise I fear that we'll just end up doing something\n> in a more complicated way than is really necessary.\n\nI tried to provide at least some of that in the comments at the start of\npgstat.c in 15+. There's very likely more that should be added, but I think\nit's a decent start.\n\nGreetings,\n\nAndres Freund\n\n\n[1] https://www.postgresql.org/message-id/CAOtHd0ApHna7_p6mvHoO%2BgLZdxjaQPRemg3_o0a4ytCPijLytQ%40mail.gmail.com\n\n\n", "msg_date": "Wed, 16 Nov 2022 11:52:32 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "On Thu, Nov 17, 2022 at 12:49 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> > I also think that a new pg_stat_checkpoint view is needed\n> > because, right now, the PgStat_CheckpointerStats stats are exposed via\n> > the pg_stat_bgwriter view, having a separate view for checkpoint stats\n> > is good here.\n>\n> Yep.\n\nOn Wed, Nov 16, 2022 at 11:44 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> > I also think that a new pg_stat_checkpoint view is needed\n> > because, right now, the PgStat_CheckpointerStats stats are exposed via\n> > the pg_stat_bgwriter view, having a separate view for checkpoint stats\n> > is good here.\n>\n> I agree that we should do that, but largely independent of the architectural\n> question at hand.\n\nThanks. 
I quickly prepared a patch introducing pg_stat_checkpointer\nview and posted it here -\nhttps://www.postgresql.org/message-id/CALj2ACVxX2ii%3D66RypXRweZe2EsBRiPMj0aHfRfHUeXJcC7kHg%40mail.gmail.com.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 17 Nov 2022 19:01:45 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "On Wed, Nov 16, 2022 at 2:52 PM Andres Freund <andres@anarazel.de> wrote:\n> I don't think we'd want every buffer write or whatnot go through the\n> changecount mechanism, on some non-x86 platforms that could be noticable. But\n> if we didn't stage the stats updates locally I think we could make most of the\n> stats changes without that overhead. For updates that just increment a single\n> counter there's simply no benefit in the changecount mechanism afaict.\n\nYou might be right, but I'm not sure whether it's worth stressing\nabout. The progress reporting mechanism uses the st_changecount\nmechanism, too, and as far as I know nobody's complained about that\nhaving too much overhead. Maybe they have, though, and I've just\nmissed it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Nov 2022 09:03:32 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "Hi,\n\nOn 2022-11-17 09:03:32 -0500, Robert Haas wrote:\n> On Wed, Nov 16, 2022 at 2:52 PM Andres Freund <andres@anarazel.de> wrote:\n> > I don't think we'd want every buffer write or whatnot go through the\n> > changecount mechanism, on some non-x86 platforms that could be noticable. 
But\n> > if we didn't stage the stats updates locally I think we could make most of the\n> > stats changes without that overhead. For updates that just increment a single\n> > counter there's simply no benefit in the changecount mechanism afaict.\n>\n> You might be right, but I'm not sure whether it's worth stressing\n> about. The progress reporting mechanism uses the st_changecount\n> mechanism, too, and as far as I know nobody's complained about that\n> having too much overhead. Maybe they have, though, and I've just\n> missed it.\n\nI've seen it in profiles, although not as the major contributor. Most places\ndo a reasonable amount of work between calls though.\n\nAs an experiment, I added a progress report to BufferSync()'s first loop\n(i.e. where it checks all buffers). On a 128GB shared_buffers cluster that\nincreases the time for a do-nothing checkpoint from ~235ms to ~280ms. If I\nremove the changecount stuff and use a single write + write barrier, it ends\nup as 250ms. Inlining brings it down a bit further, to 247ms.\n\nObviously this is a very extreme case - we only do very little work between\nthe progress report calls. But it does seem to show that the overhead is not\nentirely neglegible.\n\n\nI think pgstat_progress_start_command() needs the changecount stuff, as does\npgstat_progress_update_multi_param(). But for anything updating a single\nparameter at a time it really doesn't do anything useful on a platform that\ndoesn't tear 64bit writes (so it could be #ifdef\nPG_HAVE_8BYTE_SINGLE_COPY_ATOMICITY).\n\n\nOut of further curiosity I wanted to test the impact when the loop doesn't\neven do a LockBufHdr() and added an unlocked pre-check. 109ms without\nprogress. 138ms with. 114ms with the simplified\npgstat_progress_update_param(). 
108ms after inlining the simplified\npgstat_progress_update_param().\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 17 Nov 2022 08:24:31 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I think pgstat_progress_start_command() needs the changecount stuff, as does\n> pgstat_progress_update_multi_param(). But for anything updating a single\n> parameter at a time it really doesn't do anything useful on a platform that\n> doesn't tear 64bit writes (so it could be #ifdef\n> PG_HAVE_8BYTE_SINGLE_COPY_ATOMICITY).\n\nSeems safe to restrict it to that case.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Nov 2022 12:18:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "On Thu, Nov 17, 2022 at 11:24 AM Andres Freund <andres@anarazel.de> wrote:\n> As an experiment, I added a progress report to BufferSync()'s first loop\n> (i.e. where it checks all buffers). On a 128GB shared_buffers cluster that\n> increases the time for a do-nothing checkpoint from ~235ms to ~280ms. If I\n> remove the changecount stuff and use a single write + write barrier, it ends\n> up as 250ms. 
Inlining brings it down a bit further, to 247ms.\n\nOK, I'd say that's pretty good evidence that we can't totally\ndisregard the issue.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Nov 2022 12:21:03 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" }, { "msg_contents": "Hi,\n\nOn 2022-11-04 09:25:52 +0100, Drouvot, Bertrand wrote:\n> Please find attached a rebase in v7.\n\ncfbot complains that the docs don't build:\nhttps://cirrus-ci.com/task/6694349031866368?logs=docs_build#L296\n\n[03:24:27.317] ref/checkpoint.sgml:66: element para: validity error : Element para is not declared in para list of possible children\n\nI've marked the patch as waiting-on-author for now.\n\n\nThere's been a bunch of architectural feedback too, but tbh, I don't know if\nwe came to any conclusion on that front...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 7 Dec 2022 11:03:49 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint\n (was: Report checkpoint progress in server logs)" }, { "msg_contents": "On Thu, 8 Dec 2022 at 00:33, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-11-04 09:25:52 +0100, Drouvot, Bertrand wrote:\n> > Please find attached a rebase in v7.\n>\n> cfbot complains that the docs don't build:\n> https://cirrus-ci.com/task/6694349031866368?logs=docs_build#L296\n>\n> [03:24:27.317] ref/checkpoint.sgml:66: element para: validity error : Element para is not declared in para list of possible children\n>\n> I've marked the patch as waiting-on-author for now.\n>\n>\n> There's been a bunch of architectural feedback too, but tbh, I don't know if\n> we came to any conclusion on that front...\n\nThere have been no updates on this thread for some 
time, so this has\nbeen switched as Returned with Feedback. Feel free to open it in the\nnext commitfest if you plan to continue on this.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 31 Jan 2023 23:16:27 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report checkpoint progress with pg_stat_progress_checkpoint (was:\n Report checkpoint progress in server logs)" } ]
[ { "msg_contents": "Hi,\n\nCurrently all the storage options for a table are very much specific\nto the heap but a different AM might need some user defined AM\nspecific parameters to help tune the AM. So here is a patch which\nprovides an AM level routine so that instead of getting parameters\nvalidated using “heap_reloptions” it will call the registered AM\nroutine.\n\ne.g:\n-- create a new access method and table using this access method\nCREATE ACCESS METHOD myam TYPE TABLE HANDLER <new_tableam_handler>;\n\nCREATE TABLE mytest (a int) USING myam ;\n\n--a new parameter is to set storage parameter for only myam as below\nALTER TABLE mytest(squargle_size = '100MB');\n\nThe user-defined parameters will have meaning only for the \"myam\",\notherwise error will be thrown. Our relcache already allows the\nAM-specific cache to be stored for each relation.\n\nOpen Question: When a user changes AM, then what should be the\nbehavior for not supported storage options? Should we drop the options\nand go with only system storage options?\nOr throw an error, in which case the user has to clean the added parameters.\n\nThanks & Regards\nSadhuPrasad\nhttp://www.EnterpriseDB.com/", "msg_date": "Wed, 29 Dec 2021 22:38:11 +0530", "msg_from": "Sadhuprasad Patro <b.sadhu@gmail.com>", "msg_from_op": true, "msg_subject": "Per-table storage parameters for TableAM/IndexAM extensions" }, { "msg_contents": "On Wed, Dec 29, 2021 at 10:38 PM Sadhuprasad Patro <b.sadhu@gmail.com>\nwrote:\n\n> Hi,\n>\n> Currently all the storage options for a table are very much specific\n> to the heap but a different AM might need some user defined AM\n> specific parameters to help tune the AM. 
So here is a patch which\n> provides an AM level routine so that instead of getting parameters\n> validated using “heap_reloptions” it will call the registered AM\n> routine.\n>\n\n+1 for the idea.\n\n\n>\n> e.g:\n> -- create a new access method and table using this access method\n> CREATE ACCESS METHOD myam TYPE TABLE HANDLER <new_tableam_handler>;\n>\n> CREATE TABLE mytest (a int) USING myam ;\n>\n> --a new parameter is to set storage parameter for only myam as below\n> ALTER TABLE mytest(squargle_size = '100MB');\n>\n\nThis will work for CREATE TABLE as well I guess as normal relation storage\nparameter works now right?\n\n\n> The user-defined parameters will have meaning only for the \"myam\",\n> otherwise error will be thrown. Our relcache already allows the\n> AM-specific cache to be stored for each relation.\n>\n> Open Question: When a user changes AM, then what should be the\n> behavior for not supported storage options? Should we drop the options\n> and go with only system storage options?\n> Or throw an error, in which case the user has to clean the added\n> parameters.\n>\n\nIMHO, if the user is changing the access method for the table then it\nshould be fine to throw an error if there are some parameters which are not\nsupported by the new AM. So that user can take a calculative call and\nfirst remove those storage options before changing the AM.\n\nI have a few comments on the patch, mostly cosmetics.\n\n1.\n+ Assert(routine->taboptions != NULL);\n\nWhy AM is not allowed to register the NULL function, if NULL is registered\nthat means the AM\ndoes not support any of the storage parameters.\n2.\n@@ -1358,6 +1358,7 @@ untransformRelOptions(Datum options)\n return result;\n }\n\n+\n /*\n * Extract and parse reloptions from a pg_class tuple.\n *\n\nUnwanted hunk (added extra line)\n\n3.\n+ * Parse options for heaps, views and toast tables. 
This is\n+ * implementation of relOptions for access method heapam.\n */\n\nBetter to say access method heap instead of heapam.\n4.\n+ * Parse options for tables.\n+ *\n+ * taboptions tables AM's option parser function\n+ *      reloptions options as text[] datum\n+ *      validate error flag\n\nFunction header comment formatting is not proper, it also has uneven\nspacing between words.\n5.\n-extract_autovac_opts(HeapTuple tup, TupleDesc pg_class_desc)\n+extract_autovac_opts(HeapTuple tup, TupleDesc pg_class_desc,\n+ reloptions_function taboptions)\n\nIndentation is not proper, run pgindent on this.\n\n5.\n>Currently all the storage options for a table are very much specific to\nthe heap but a different AM might need some user defined AM specific\nparameters to help tune the AM. So here is a patch which provides an AM\nlevel routine so that instead of getting >parameters validated using\n“heap_reloptions” it will call the registered AM routine.\n\nWrap these long commit message lines at 80 characters.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 30 Dec 2021 13:14:51 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Per-table storage parameters for TableAM/IndexAM extensions" }, { "msg_contents": "On Wed, Dec 29, 2021 at 10:38 PM Sadhuprasad Patro <b.sadhu@gmail.com>\nwrote:\n\n> Hi,\n>\n> Currently all the storage options for a table are very much specific\n> to the heap but a different AM might need some user defined AM\n> specific parameters to help tune the AM. So here is a patch which\n> provides an AM level routine so that instead of getting parameters\n> validated using “heap_reloptions” it will call the registered AM\n> routine.\n>\n>\nThis is a good idea. 
+1.\n\ne.g:\n> -- create a new access method and table using this access method\n> CREATE ACCESS METHOD myam TYPE TABLE HANDLER <new_tableam_handler>;\n>\n> CREATE TABLE mytest (a int) USING myam ;\n>\n> --a new parameter is to set storage parameter for only myam as below\n> ALTER TABLE mytest(squargle_size = '100MB');\n>\n\nThe syntax here is, ALTER TABLE <table_name> SET ( attribute_option = value\n);\n\n\n> The user-defined parameters will have meaning only for the \"myam\",\n> otherwise error will be thrown. Our relcache already allows the\n> AM-specific cache to be stored for each relation.\n>\n> Open Question: When a user changes AM, then what should be the\n> behavior for not supported storage options? Should we drop the options\n> and go with only system storage options?\n> Or throw an error, in which case the user has to clean the added\n> parameters.\n>\n\nI think throwing an error makes more sense, so that the user can clean\nthat.\n\nHere are a few quick cosmetic review comments:\n\n1)\n\n> @@ -1372,7 +1373,8 @@ untransformRelOptions(Datum options)\n> */\n> bytea *\n> extractRelOptions(HeapTuple tuple, TupleDesc tupdesc,\n> - amoptions_function amoptions)\n> + amoptions_function amoptions,\n> + reloptions_function taboptions)\n>\n\nIndentation has been changed and needs to be fixed.\n\n2)\n\n case RELKIND_MATVIEW:\n> options = table_reloptions(taboptions, classForm->relkind,\n> datum, false);\n> break;\n>\n\nGoing beyond line limit.\n\n3)\n\ndiff --git a/src/backend/access/heap/heapam_handler.c\n> b/src/backend/access/heap/heapam_handler.c\n> index 9befe01..6324d7e 100644\n> --- a/src/backend/access/heap/heapam_handler.c\n> +++ b/src/backend/access/heap/heapam_handler.c\n> @@ -2581,6 +2581,7 @@ static const TableAmRoutine heapam_methods = {\n> .index_build_range_scan = heapam_index_build_range_scan,\n> .index_validate_scan = heapam_index_validate_scan,\n>\n> + .taboptions = heap_reloptions,\n>\n\nInstead of taboptions can name this as relation_options to 
be in sink with\nother members.\n\n4)\n\n@@ -2427,7 +2428,7 @@ do_autovacuum(void)\n> */\n> MemoryContextSwitchTo(AutovacMemCxt);\n> tab = table_recheck_autovac(relid, table_toast_map, pg_class_desc,\n> - effective_multixact_freeze_max_age);\n> + classRel->rd_tableam->taboptions, effective_multixact_freeze_max_age);\n> if (tab == NULL)\n>\n\nSplit the another added parameter to function in the next line.\n\n5)\n\nOverall patch has many indentation issues, I would suggest running the\npgindent to fix those.\n\n\n\nRegards\nRushabh Lathia\nwww.EnterpriseDB.com", "msg_date": "Tue, 4 Jan 2022 13:02:36 +0530", "msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Per-table storage parameters for TableAM/IndexAM extensions" }, { "msg_contents": "Big +1, this is a great addition!\n\nI think it would be very useful if there were some tests for this new\nfeature. 
Something similar to the tests for storage parameters for\nindex AMs in src/test/modules/dummy_index_am.\n\nApart from that I think the documentation for table storage parameters\nneeds to be updated in doc/src/sgml/ref/create_table.sgml. It now\nneeds to indicate that these parameters are different for each table\naccess method. Similar to this paragraph in the create index storage\nparameter section of the docs:\n\n> Each index method has its own set of allowed storage parameters.\n> The B-tree, hash, GiST and SP-GiST index methods all accept this\n> parameter\n\nJelte\n\n\n", "msg_date": "Mon, 17 Jan 2022 12:17:16 +0100", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: Per-table storage parameters for TableAM/IndexAM extensions" }, { "msg_contents": "On Mon, Jan 17, 2022 at 4:47 PM Jelte Fennema <postgres@jeltef.nl> wrote:\n>\n> Big +1, this is a great addition!\n>\n> I think it would be very useful if there were some tests for this new\n> feature. Something similar to the tests for storage parameters for\n> index AMs in src/test/modules/dummy_index_am.\n>\nSure, I will refer to the index AM test and add the test cases needed.\n\n> Apart from that I think the documentation for table storage parameters\n> needs to be updated in doc/src/sgml/ref/create_table.sgml. It now\n> needs to indicate that these parameters are different for each table\n> access method. Similar to this paragraph in the create index storage\n> parameter section of the docs:\n\nSure, I will add the documentation part for this.\n\nAs of now, I have fixed the comments from Dilip & Rushabh and have\ndone some more changes after internal testing and review. 
Please find\nthe latest patch attached.\n\nThanks & Regards\nSadhuPrasad\nwww.EnterpriseDB.com", "msg_date": "Tue, 18 Jan 2022 22:44:10 +0530", "msg_from": "Sadhuprasad Patro <b.sadhu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Per-table storage parameters for TableAM/IndexAM extensions" }, { "msg_contents": "On Tue, 2022-01-18 at 22:44 +0530, Sadhuprasad Patro wrote:\n> As of now, I have fixed the comments from Dilip & Rushabh and have\n> done some more changes after internal testing and review. Please find\n> the latest patch attached.\n\nHi,\n\nThank you for working on this! Some questions/comments:\n\nAt a high level, it seems there are some options that are common to all\ntables, regardless of the AM. For instance, the vacuum/autovacuum\noptions. (Even if the AM doesn't require vacuum, then it needs to at\nleast be able to communicate that somehow.) I think parallel_workers\nand user_catalog_table also fit into this category. That means we need\nall of StdRdOptions to be the same, with the possible exception of\ntoast_tuple_target and/or fillfactor.\n\nThe current patch just leaves it up to the AM to return a bytea that\ncan be cast to StdRdOptions, which seems like a fragile API.\n\nThat makes me think that what we really want is to have *extra* options\nfor a table AM, not an entirely custom set. Do you agree?\n\nIf so, I suggest you refactor so that if validation doesn't recognize a\nparameter, it calls a table AM method to validate it, and lets it in if\nvalidation succeeds. That way all the stuff around StdRdOptions is\nunchanged. 
When the table AM needs the parameter value, it can parse\npg_class.reloptions for itself and save it in rd_amcache.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 10 Feb 2022 10:37:54 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Per-table storage parameters for TableAM/IndexAM extensions" }, { "msg_contents": "On Wed, Dec 29, 2021 at 12:08 PM Sadhuprasad Patro <b.sadhu@gmail.com> wrote:\n> Open Question: When a user changes AM, then what should be the\n> behavior for not supported storage options? Should we drop the options\n> and go with only system storage options?\n> Or throw an error, in which case the user has to clean the added parameters.\n\nA few people have endorsed the error behavior here but I foresee problems.\n\nImagine that I am using the \"foo\" tableam with \"compression=lots\" and\nI want to switch to the \"bar\" AM which does not support that option.\nIf I remove the \"compression=lots\" option using a separate command,\nthe \"foo\" table AM may rewrite my whole table and decompress\neverything. Then when I convert to the \"bar\" AM it's going to have to\nbe rewritten again. That's painful. I clearly need some way to switch\nAMs without having to rewrite the table twice.\n\nIt's also interesting to consider the other direction. If I am\nswitching from \"bar\" to \"foo\" I would really like to be able to add\nthe \"compression=lots\" option at the same time I make the switch.\nThere needs to be some syntax for that.\n\nOne way to solve the first of these problem is to silently drop\nunsupported options. Maybe a better way is to have syntax that allows\nyou to specify options to be added and removed at the time you switch\nAMs e.g.:\n\nALTER TABLE mytab SET ACCESS METHOD bar OPTIONS (DROP compression);\nALTER TABLE mytab SET ACCESS METHOD foo OPTIONS (ADD compression 'lots');\n\nI don't like that particular syntax a ton personally but it does match\nwhat we already use for ALTER SERVER. 
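For reference, the existing foreign-server form lets a single OPTIONS list mix all three actions (ADD is assumed when no action keyword is given); the server and option names below are just examples:

```sql
ALTER SERVER myserver
    OPTIONS (SET host 'bar', DROP port, ADD use_remote_estimate 'true');
```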
Unfortunately it's wildly\ninconsistent with what we do for ALTER TABLE. Another idea might be\nsomething like:\n\nALTER TABLE mytab SET ACCESS METHOD bar RESET compression;\nALTER TABLE mytab SET ACCESS METHOD foo SET compression = 'lots';\n\nYou'd need to be able to do multiple things with one command e.g.\n\nALTER TABLE mytab SET ACCESS METHOD baz RESET compression, SET\npreferred_fruit = 'banana';\n\nRegardless of the details, I don't think it's viable to just say,\nwell, rewrite the table multiple times if that's what it takes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Feb 2022 16:05:04 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Per-table storage parameters for TableAM/IndexAM extensions" }, { "msg_contents": "On Sat, Feb 12, 2022 at 2:35 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n>\n> Imagine that I am using the \"foo\" tableam with \"compression=lots\" and\n> I want to switch to the \"bar\" AM which does not support that option.\n> If I remove the \"compression=lots\" option using a separate command,\n> the \"foo\" table AM may rewrite my whole table and decompress\n> everything. Then when I convert to the \"bar\" AM it's going to have to\n> be rewritten again. That's painful. I clearly need some way to switch\n> AMs without having to rewrite the table twice.\n>\n\nI agree with you, if we force users to drop the option as a separate\ncommand then we will have to rewrite the table twice.\n\n\n> It's also interesting to consider the other direction. If I am\n> switching from \"bar\" to \"foo\" I would really like to be able to add\n> the \"compression=lots\" option at the same time I make the switch.\n> There needs to be some syntax for that.\n>\n> One way to solve the first of these problem is to silently drop\n> unsupported options. 
Maybe a better way is to have syntax that allows\n> you to specify options to be added and removed at the time you switch\n> AMs e.g.:\n>\n\n+1\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 17 Feb 2022 10:19:13 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Per-table storage parameters for TableAM/IndexAM extensions" }, { "msg_contents": "> On Sat, Feb 12, 2022 at 2:35 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>>\n>>\n>> Imagine that I am using the \"foo\" tableam with \"compression=lots\" and\n>> I want to switch to the \"bar\" AM which does not support that option.\n>> If I remove the \"compression=lots\" option using a separate command,\n>> the \"foo\" table AM may rewrite my whole table and decompress\n>> everything. Then when I convert to the \"bar\" AM it's going to have to\n>> be rewritten again. That's painful. I clearly need some way to switch\n>> AMs without having to rewrite the table twice.\n>\nAgreed. Better to avoid multiple rewrites here. Thank you for figuring out this.\n\n\n> You'd need to be able to do multiple things with one command e.g.\n\n> ALTER TABLE mytab SET ACCESS METHOD baz RESET compression, SET\n> preferred_fruit = 'banana';\n\n+1\nSilently dropping some options is not right and it may confuse users\ntoo. 
So I would like to go\nfor the command you have suggested, where the user should be able to\nSET & RESET multiple\noptions in a single command for an object.\n\nThanks & Regards\nSadhuPrasad\nhttp://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Feb 2022 23:25:25 +0530", "msg_from": "Sadhuprasad Patro <b.sadhu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Per-table storage parameters for TableAM/IndexAM extensions" }, { "msg_contents": "On Thu, 17 Feb 2022 at 17:55, Sadhuprasad Patro <b.sadhu@gmail.com> wrote:\n>\n> > On Sat, Feb 12, 2022 at 2:35 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >>\n> >>\n> >> Imagine that I am using the \"foo\" tableam with \"compression=lots\" and\n> >> I want to switch to the \"bar\" AM which does not support that option.\n> >> If I remove the \"compression=lots\" option using a separate command,\n> >> the \"foo\" table AM may rewrite my whole table and decompress\n> >> everything. Then when I convert to the \"bar\" AM it's going to have to\n> >> be rewritten again. That's painful. I clearly need some way to switch\n> >> AMs without having to rewrite the table twice.\n> >\n> Agreed. Better to avoid multiple rewrites here. Thank you for figuring out this.\n>\n>\n> > You'd need to be able to do multiple things with one command e.g.\n>\n> > ALTER TABLE mytab SET ACCESS METHOD baz RESET compression, SET\n> > preferred_fruit = 'banana';\n>\n> +1\n> Silently dropping some options is not right and it may confuse users\n> too. So I would like to go\n> for the command you have suggested, where the user should be able to\n> SET & RESET multiple\n> options in a single command for an object.\n\nI prefer ADD/DROP to SET/RESET. 
The latter doesn't convey the meaning\naccurately to me.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 18 Feb 2022 17:17:50 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Per-table storage parameters for TableAM/IndexAM extensions" }, { "msg_contents": "On Fri, Feb 18, 2022 at 10:48 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> On Thu, 17 Feb 2022 at 17:55, Sadhuprasad Patro <b.sadhu@gmail.com> wrote:\n> >\n> > > On Sat, Feb 12, 2022 at 2:35 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > >>\n> > >>\n> > >> Imagine that I am using the \"foo\" tableam with \"compression=lots\" and\n> > >> I want to switch to the \"bar\" AM which does not support that option.\n> > >> If I remove the \"compression=lots\" option using a separate command,\n> > >> the \"foo\" table AM may rewrite my whole table and decompress\n> > >> everything. Then when I convert to the \"bar\" AM it's going to have to\n> > >> be rewritten again. That's painful. I clearly need some way to switch\n> > >> AMs without having to rewrite the table twice.\n> > >\n> > Agreed. Better to avoid multiple rewrites here. Thank you for figuring out this.\n> >\n> >\n> > > You'd need to be able to do multiple things with one command e.g.\n> >\n> > > ALTER TABLE mytab SET ACCESS METHOD baz RESET compression, SET\n> > > preferred_fruit = 'banana';\n> >\n> > +1\n> > Silently dropping some options is not right and it may confuse users\n> > too. So I would like to go\n> > for the command you have suggested, where the user should be able to\n> > SET & RESET multiple\n> > options in a single command for an object.\n>\n> I prefer ADD/DROP to SET/RESET. 
The latter doesn't convey the meaning\n> accurately to me.\n\nI have added a dummy test module for table AM and did the document\nchange in the latest patch attached...\nThe next plan is to let users change the AM storage parameters\nswiftly through a single command. I will work on the same and give\nanother version.\n\nAs of now I will go with the ADD/DROP keywords for \"ALTER TABLE\" command.\n\nThanks & Regards\nSadhuPrasad\nhttp://www.EnterpriseDB.com", "msg_date": "Thu, 24 Feb 2022 12:26:08 +0530", "msg_from": "Sadhuprasad Patro <b.sadhu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Per-table storage parameters for TableAM/IndexAM extensions" }, { "msg_contents": "Hi,\n\nOn 2022-02-24 12:26:08 +0530, Sadhuprasad 
Patro wrote:\n> > I have added a dummy test module for table AM and did the document\n> > change in the latest patch attached...\n>\n> The test module doesn't build on windows, unfortunately... Looks like you need\n> to add PGDLLIMPORT to a few variables:\n> [01:26:18.539] c:\\cirrus\\src\\test\\modules\\dummy_table_am\\dummy_table_am.c(488): warning C4700: uninitialized local variable 'rel' used [c:\\cirrus\\dummy_table_am.vcxproj]\n> [01:26:18.539] dummy_table_am.obj : error LNK2001: unresolved external symbol synchronize_seqscans [c:\\cirrus\\dummy_table_am.vcxproj]\n> [01:26:18.539] .\\Debug\\dummy_table_am\\dummy_table_am.dll : fatal error LNK1120: 1 unresolved externals [c:\\cirrus\\dummy_table_am.vcxproj]\n> [01:26:18.539] 1 Warning(s)\n> [01:26:18.539] 2 Error(s)\n>\n> https://cirrus-ci.com/task/5067519584108544?logs=build#L2085\n>\n> Marked the CF entry as waiting-on-author.\n\nHI,\nThank you for the feedback Andres. I will take care of the same.\n\nAs of now attached is a new patch on this to support the addition of\nnew option parameters or drop the old parameters through ALTER TABLE\ncommand.\nNeed some more testing on this, which is currently in progress.\nProviding the patch to get early feedback in case of any major\ncomments...\n\nNew Command:\nALTER TABLE name SET ACCESS METHOD amname [ OPTIONS ( ADD | DROP\noption 'value' [, ... 
] ) ];\n\n\nThanks & Regards\nSadhuPrasad\nhttp://www.EnterpriseDB.com", "msg_date": "Tue, 22 Mar 2022 09:04:37 +0530", "msg_from": "Sadhuprasad Patro <b.sadhu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Per-table storage parameters for TableAM/IndexAM extensions" }, { "msg_contents": "This patch still has code warnings on the cfbot and I don't think\nthey're platform-specific:\n\n[00:28:28.236] gram.y: In function ‘base_yyparse’:\n[00:28:28.236] gram.y:2851:58: error: passing argument 2 of\n‘makeDefElemExtended’ from incompatible pointer type\n[-Werror=incompatible-pointer-types]\n[00:28:28.236] 2851 | $$ = makeDefElemExtended(NULL, $2, NULL,\nDEFELEM_DROP, @2);\n[00:28:28.236] | ~~~~~~~~~^~~~~~~\n[00:28:28.236] | |\n[00:28:28.236] | DefElem *\n[00:28:28.236] In file included from gram.y:58:\n[00:28:28.236] ../../../src/include/nodes/makefuncs.h:102:60: note:\nexpected ‘char *’ but argument is of type ‘DefElem *’\n[00:28:28.236] 102 | extern DefElem *makeDefElemExtended(char\n*nameSpace, char *name, Node *arg,\n[00:28:28.236] | ~~~~~~^~~~\n\nI gather the patch is still a WIP and ideally we would want to give\nfeedback on patches in CFs when the author's are looking for it but\nthis is the last week before feature freeze and the main focus is on\ncommittable patches. I'll move it to next CF.\n\n\n", "msg_date": "Fri, 1 Apr 2022 11:21:11 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Per-table storage parameters for TableAM/IndexAM extensions" }, { "msg_contents": "This entry has been waiting on author input for a while (our current\nthreshold is roughly two weeks), so I've marked it Returned with\nFeedback.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can resurrect the patch entry by visiting\n\n https://commitfest.postgresql.org/38/3495/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". 
(Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 2 Aug 2022 13:56:27 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Per-table storage parameters for TableAM/IndexAM extensions" } ]
[ { "msg_contents": "Hi,\n\nIsn't this a corrupted pathname?\n\n2021-12-29 03:39:55.708 CST [79851:1] WARNING: removal of orphan\narchive status file\n\"pg_wal/archive_status/000000010000000000000003.00000028.backup000000010000000000000004.ready\"\nfailed too many times, will try again later\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=peripatus&dt=2021-12-29%2009%3A20%3A54\n\n\n", "msg_date": "Thu, 30 Dec 2021 09:20:44 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Strange path from pgarch_readyXlog()" }, { "msg_contents": "On 12/29/21, 12:22 PM, \"Thomas Munro\" <thomas.munro@gmail.com> wrote:\r\n> Isn't this a corrupted pathname?\r\n>\r\n> 2021-12-29 03:39:55.708 CST [79851:1] WARNING: removal of orphan\r\n> archive status file\r\n> \"pg_wal/archive_status/000000010000000000000003.00000028.backup000000010000000000000004.ready\"\r\n> failed too many times, will try again later\r\n\r\nI bet this was a simple mistake in beb4e9b.\r\n\r\nNathan\r\n\r\ndiff --git a/src/backend/postmaster/pgarch.c b/src/backend/postmaster/pgarch.c\r\nindex 434939be9b..b5b0d4e12f 100644\r\n--- a/src/backend/postmaster/pgarch.c\r\n+++ b/src/backend/postmaster/pgarch.c\r\n@@ -113,7 +113,7 @@ static PgArchData *PgArch = NULL;\r\n * is empty, at which point another directory scan must be performed.\r\n */\r\n static binaryheap *arch_heap = NULL;\r\n-static char arch_filenames[NUM_FILES_PER_DIRECTORY_SCAN][MAX_XFN_CHARS];\r\n+static char arch_filenames[NUM_FILES_PER_DIRECTORY_SCAN][MAX_XFN_CHARS + 1];\r\n static char *arch_files[NUM_FILES_PER_DIRECTORY_SCAN];\r\n static int arch_files_size = 0;\r\n\r\n", "msg_date": "Wed, 29 Dec 2021 20:50:56 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Strange path from pgarch_readyXlog()" }, { "msg_contents": "\"Bossart, Nathan\" <bossartn@amazon.com> writes:\n> I bet this was a simple mistake in beb4e9b.\n\n> -static char 
arch_filenames[NUM_FILES_PER_DIRECTORY_SCAN][MAX_XFN_CHARS];\n> +static char arch_filenames[NUM_FILES_PER_DIRECTORY_SCAN][MAX_XFN_CHARS + 1];\n\nHm, yeah, that looks like a pretty obvious bug.\n\nWhile we're here, I wonder if we ought to get rid of the static-ness of\nthese arrays. I realize that they're only eating a few kB, but they're\ndoing so in every postgres process, when they'll only be used in the\narchiver.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 Dec 2021 16:04:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Strange path from pgarch_readyXlog()" }, { "msg_contents": "On 12/29/21, 1:04 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> \"Bossart, Nathan\" <bossartn@amazon.com> writes:\r\n>> I bet this was a simple mistake in beb4e9b.\r\n>\r\n>> -static char arch_filenames[NUM_FILES_PER_DIRECTORY_SCAN][MAX_XFN_CHARS];\r\n>> +static char arch_filenames[NUM_FILES_PER_DIRECTORY_SCAN][MAX_XFN_CHARS + 1];\r\n>\r\n> Hm, yeah, that looks like a pretty obvious bug.\r\n>\r\n> While we're here, I wonder if we ought to get rid of the static-ness of\r\n> these arrays. I realize that they're only eating a few kB, but they're\r\n> doing so in every postgres process, when they'll only be used in the\r\n> archiver.\r\n\r\nThis crossed my mind, too. I also think one of the arrays can be\r\neliminated in favor of just using the heap (after rebuilding with a\r\nreversed comparator). Here is a minimally-tested patch that\r\ndemonstrates what I'm thinking. \r\n\r\nNathan", "msg_date": "Wed, 29 Dec 2021 22:36:45 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Strange path from pgarch_readyXlog()" }, { "msg_contents": "\"Bossart, Nathan\" <bossartn@amazon.com> writes:\n> On 12/29/21, 1:04 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\n>> While we're here, I wonder if we ought to get rid of the static-ness of\n>> these arrays. 
I realize that they're only eating a few kB, but they're\n>> doing so in every postgres process, when they'll only be used in the\n>> archiver.\n\n> This crossed my mind, too. I also think one of the arrays can be\n> eliminated in favor of just using the heap (after rebuilding with a\n> reversed comparator). Here is a minimally-tested patch that\n> demonstrates what I'm thinking. \n\nI already pushed a patch that de-static-izes those arrays, so this\nneeds rebased at least. However, now that you mention it it does\nseem like maybe the intermediate arch_files[] array could be dropped\nin favor of just pulling the next file from the heap.\n\nThe need to reverse the heap's sort order seems like a problem\nthough. I really dislike the kluge you used here with a static flag\nthat inverts the comparator's sort order behind the back of the\nbinary-heap mechanism. It seems quite accidental that that doesn't\nfall foul of asserts or optimizations in binaryheap.c. For\ninstance, if binaryheap_build decided it needn't do anything when\nbh_has_heap_property is already true, this code would fail. In any\ncase, we'd need to spend O(n) time inverting the heap's sort order,\nso this'd likely be slower than the current code.\n\nOn the whole I'm inclined not to bother trying to optimize this\nfurther. The main thing that concerned me is that somebody would\nbump up NUM_FILES_PER_DIRECTORY_SCAN to the point where the static\nspace consumption becomes really problematic, and we've fixed that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 Dec 2021 18:11:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Strange path from pgarch_readyXlog()" }, { "msg_contents": "On 12/29/21, 3:11 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> \"Bossart, Nathan\" <bossartn@amazon.com> writes:\r\n>> This crossed my mind, too. 
I also think one of the arrays can be\r\n>> eliminated in favor of just using the heap (after rebuilding with a\r\n>> reversed comparator). Here is a minimally-tested patch that\r\n>> demonstrates what I'm thinking.\r\n>\r\n> I already pushed a patch that de-static-izes those arrays, so this\r\n> needs rebased at least. However, now that you mention it it does\r\n> seem like maybe the intermediate arch_files[] array could be dropped\r\n> in favor of just pulling the next file from the heap.\r\n>\r\n> The need to reverse the heap's sort order seems like a problem\r\n> though. I really dislike the kluge you used here with a static flag\r\n> that inverts the comparator's sort order behind the back of the\r\n> binary-heap mechanism. It seems quite accidental that that doesn't\r\n> fall foul of asserts or optimizations in binaryheap.c. For\r\n> instance, if binaryheap_build decided it needn't do anything when\r\n> bh_has_heap_property is already true, this code would fail. In any\r\n> case, we'd need to spend O(n) time inverting the heap's sort order,\r\n> so this'd likely be slower than the current code.\r\n>\r\n> On the whole I'm inclined not to bother trying to optimize this\r\n> further. The main thing that concerned me is that somebody would\r\n> bump up NUM_FILES_PER_DIRECTORY_SCAN to the point where the static\r\n> space consumption becomes really problematic, and we've fixed that.\r\n\r\nYour assessment seems reasonable to me. If there was a better way to\r\nadjust the comparator for the heap, maybe there would be a stronger\r\ncase for this approach. I certainly don't think it's worth inventing\r\nsomething for just this use-case.\r\n\r\nThanks for fixing this!\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 30 Dec 2021 16:50:53 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Strange path from pgarch_readyXlog()" } ]
[ { "msg_contents": "Hi hackers,\n\nI noticed that below critical replication state changes are DEBUG1 level\nlogged. Any concern about changing the below two messages log level to LOG?\nIf this is too verbose, we can introduce a new GUC,\nlog_replication_state_changes that logs the replication state changes when\nenabled irrespective of the log level.\n\n1/\n\n/*\n * If we're in catchup state, move to streaming. This is an\n * important state change for users to know about, since before\n * this point data loss might occur if the primary dies and we\n * need to failover to the standby. The state change is also\n * important for synchronous replication, since commits that\n * started to wait at that point might wait for some time.\n */\nif (MyWalSnd->state == WALSNDSTATE_CATCHUP)\n{\nereport(DEBUG1,\n(errmsg_internal(\"\\\"%s\\\" has now caught up with upstream server\",\napplication_name)));\nWalSndSetState(WALSNDSTATE_STREAMING);\n}\n\n2/\n\nereport(DEBUG1,\n(errmsg_internal(\"standby \\\"%s\\\" now has synchronous standby priority %u\",\napplication_name, priority)));\n\n\nThanks,\nSatya", "msg_date": "Wed, 29 Dec 2021 13:23:49 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": true, "msg_subject": "Logging replication state changes" }, { "msg_contents": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com> writes:\n> I noticed that below critical replication state changes are DEBUG1 level\n> logged. Any concern about changing the below two messages log level to LOG?\n\nWhy?  These seem like perfectly routine messages.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 Dec 2021 17:04:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Logging replication state changes" }, { "msg_contents": "On Wed, Dec 29, 2021 at 2:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com> writes:\n> > I noticed that below critical replication state changes are DEBUG1 level\n> > logged. Any concern about changing the below two messages log level to\n> LOG?\n>\n> Why?  These seem like perfectly routine messages.\n>\n\nConsider a scenario where we have a primary and two sync standby (s1 and\ns2) where s1 is a preferred failover target and s2 is next with\nsynchronous_standby_names = 'First 1 ('s1','s2')'. In an event, s1\nstreaming replication is broken and reestablished because of a planned or\nan unplanned event then s2 participates in the sync commits and makes sure\nthe writes are not stalled on the primary. I would like to know the time\nwindow where s1 is not actively acknowledging the commits and the writes\nare dependent on s2. 
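For the record, that setup reads roughly like this in postgresql.conf on the primary (a sketch; s1 and s2 stand for the application_name values the standbys connect with):

```
synchronous_commit = on
synchronous_standby_names = 'FIRST 1 (s1, s2)'
```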
Also if the service layer decides to failover to s2\ninstead of s1 because s1 is lagging I need evidence in the log to explain\nthe behavior.\n\n\n\n> regards, tom lane\n>", "msg_date": "Wed, 29 Dec 2021 14:47:59 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Logging replication state changes" }, { "msg_contents": "On Thu, Dec 30, 2021 at 4:18 AM SATYANARAYANA NARLAPURAM\n<satyanarlapuram@gmail.com> wrote:\n>\n> On Wed, Dec 29, 2021 at 2:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com> writes:\n>> > I noticed that below critical replication state changes are DEBUG1 level\n>> > logged. Any concern about changing the below two messages log level to LOG?\n>>\n>> Why? 
These seem like perfectly routine messages.\n>\n>\n> Consider a scenario where we have a primary and two sync standby (s1 and s2) where s1 is a preferred failover target and s2 is next with synchronous_standby_names = 'First 1 ('s1','s2')'. In an event, s1 streaming replication is broken and reestablished because of a planned or an unplanned event then s2 participates in the sync commits and makes sure the writes are not stalled on the primary. I would like to know the time window where s1 is not actively acknowledging the commits and the writes are dependent on s2. Also if the service layer decides to failover to s2 instead of s1 because s1 is lagging I need evidence in the log to explain the behavior.\n>\n\nIsn't it better to get this information via pg_stat_replication view\n(via state and sync_priority) columns?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 8 Jan 2022 16:56:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logging replication state changes" }, { "msg_contents": "On Sat, Jan 8, 2022 at 3:26 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Thu, Dec 30, 2021 at 4:18 AM SATYANARAYANA NARLAPURAM\n> <satyanarlapuram@gmail.com> wrote:\n> >\n> > On Wed, Dec 29, 2021 at 2:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>\n> >> SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com> writes:\n> >> > I noticed that below critical replication state changes are DEBUG1\n> level\n> >> > logged. Any concern about changing the below two messages log level\n> to LOG?\n> >>\n> >> Why? These seem like perfectly routine messages.\n> >\n> >\n> > Consider a scenario where we have a primary and two sync standby (s1 and\n> s2) where s1 is a preferred failover target and s2 is next with\n> synchronous_standby_names = 'First 1 ('s1','s2')'. 
In an event, s1\n> streaming replication is broken and reestablished because of a planned or\n> an unplanned event then s2 participates in the sync commits and makes sure\n> the writes are not stalled on the primary. I would like to know the time\n> window where s1 is not actively acknowledging the commits and the writes\n> are dependent on s2. Also if the service layer decides to failover to s2\n> instead of s1 because s1 is lagging I need evidence in the log to explain\n> the behavior.\n> >\n>\n> Isn't it better to get this information via pg_stat_replication view\n> (via state and sync_priority) columns?\n>\n\nWe need the historical information to analyze and root cause in addition to\nthe live debugging. It would be good to have better control over\nreplication messages.\n\n\n>\n> --\n> With Regards,\n> Amit Kapila.\n>", "msg_date": "Sat, 8 Jan 2022 13:29:30 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Logging replication state changes" }, { "msg_contents": "On Sat, Jan 8, 2022, at 6:29 PM, SATYANARAYANA NARLAPURAM wrote:\n> We need the historical information to analyze and root cause in addition to the live debugging. It would be good to have better control over replication messages. \nI think the answer to this demand is not to change the message level but to\nimplement a per-module log_min_messages. This idea is in the TODO [1] for more\nthan a decade. 
Check the archives.I agree with Tom that the referred messages are noisy, hence, DEBUG1 is finefor it.[1] https://wiki.postgresql.org/wiki/Todo--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Sat, 08 Jan 2022 23:55:44 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: Logging replication state changes" } ]
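The per-module log_min_messages idea raised in this thread can be pictured with Python's stdlib logging, which already has per-logger level control. This is only a hedged analogy, not PostgreSQL code; the module names ("replication", "planner") and messages are invented for illustration.

```python
import logging
from io import StringIO

# Sketch: one subsystem opted into DEBUG while the global default stays
# at WARNING -- the effect a per-module log_min_messages would have.
stream = StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(name)s:%(levelname)s:%(message)s"))

root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.WARNING)            # global default: quiet

repl = logging.getLogger("replication")
repl.setLevel(logging.DEBUG)              # this "module" is verbose

logging.getLogger("planner").debug("routine planner detail")   # suppressed
repl.debug("standby s1 caught up with the primary")            # emitted

lines = stream.getvalue().splitlines()
print(lines)
```

Only the opted-in logger's DEBUG record reaches the handler; everything else keeps the quiet global threshold, which is the behavior the thread's participants wanted without promoting the messages to LOG for everyone.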
[ { "msg_contents": "Hi,\n\nThe Examples section in the documentation for the SELECT command [1]\nonly contains a single example on how to join two tables,\nwhich is written in SQL-89 style:\n\nSELECT f.title, f.did, d.name, f.date_prod, f.kind\n FROM distributors d, films f\n WHERE f.did = d.did\n\nI think it's good to keep this example query as it is,\nand suggest we add the following equivalent queries:\n\nSELECT f.title, f.did, d.name, f.date_prod, f.kind\n FROM distributors d\n JOIN films f ON f.did = d.did\n\nSELECT f.title, f.did, d.name, f.date_prod, f.kind\n FROM distributors d\n JOIN films f USING (did)\n\nSELECT f.title, f.did, d.name, f.date_prod, f.kind\n FROM distributors d\n NATURAL JOIN films f\n\nI also think it would be an improvement to break up the from_item below into three separate items,\nsince the optional NATURAL cannot occur in combination with ON nor USING.\n \n from_item [ NATURAL ] join_type from_item [ ON join_condition | USING ( join_column [, ...] ) [ AS join_using_alias ] ]\n\nSuggestion:\n\n from_item join_type from_item ON join_condition\n from_item join_type from_item USING ( join_column [, ...] 
) [ AS join_using_alias ]\n from_item NATURAL join_type from_item\n \nThis would be more readable imo.\nI picked the order ON, USING, NATURAL to match the order they are described in the FROM Clause section.\n\n/Joel\n\n[1] https://www.postgresql.org/docs/current/sql-select.html\n", "msg_date": "Thu, 30 Dec 2021 00:11:26 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "SELECT documentation" }, { "msg_contents": "On Thu, Dec 30, 2021 at 12:11:26AM +0100, Joel Jacobson wrote:\n> Hi,\n> \n> The Examples section in the documentation for the SELECT command [1]\n> only contains a single example on how to join two tables,\n> which is written in SQL-89 style:\n> \n> SELECT f.title, f.did, d.name, f.date_prod, f.kind\n> FROM distributors d, films f\n> WHERE f.did = d.did\n> \n> I think it's good to keep this example query as it is,\n> and suggest we add the following equivalent queries:\n> \n> SELECT f.title, f.did, d.name, f.date_prod, f.kind\n> FROM distributors d\n> JOIN films f ON f.did = d.did\n> \n> SELECT f.title, f.did, d.name, f.date_prod, f.kind\n> FROM distributors d\n> JOIN films f USING (did)\n> \n> SELECT f.title, f.did, d.name, f.date_prod, f.kind\n> FROM distributors d\n> NATURAL JOIN films f\n\nHi, I agree we should show the more modern JOIN sytax. However, this is\njust an example, so one example should be sufficient. I went with the\nfirst one in the attached patch.\n\nShould we link to the join docs?\n\n\thttps://www.postgresql.org/docs/15/queries-table-expressions.html#QUERIES-FROM\n\nI didn't see anything additional there that would warrant a link.\n\n> I also think it would be an improvement to break up the from_item below into\n> three separate items,\n> since the optional NATURAL cannot occur in combination with ON nor USING.\n> \n> from_item [ NATURAL ] join_type from_item [ ON join_condition | USING (\n> join_column [, ...] ) [ AS join_using_alias ] ]\n\nAgreed. 
I am surprised this has stayed like this for so long --- it is\nconfusing.\n\n> Suggestion:\n> \n> from_item join_type from_item ON join_condition\n> from_item join_type from_item USING ( join_column [, ...] ) [ AS\n> join_using_alias ]\n> from_item NATURAL join_type from_item\n> \n> This would be more readable imo.\n> I picked the order ON, USING, NATURAL to match the order they are described in\n> the FROM Clause section.\n\nI went a different direction, since I was fine with ON/USING being a\nchoice, rather than optional. Also, CROSS JOIN can't use a join_type,\nso I split the one line into three in the attached patch, and verified\nthis from gram.y. Our join docs have this clearly shown:\n\n https://www.postgresql.org/docs/15/queries-table-expressions.html#QUERIES-FROM\n\n from_item join_type from_item { ON join_condition | USING ( join_column [, ...] ) [ AS join_using_alias ] }\n from_item NATURAL join_type from_item\n from_item CROSS JOIN from_item\n\nbut for some reason SELECT had them all mashed together. Should I\nsplit ON/USING on separate lines?\n\nYou can see the result here:\n\n\thttps://momjian.us/tmp/pgsql/sql-select.html\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson", "msg_date": "Sat, 13 Aug 2022 21:50:03 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: SELECT documentation" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> Hi, I agree we should show the more modern JOIN sytax. However, this is\n> just an example, so one example should be sufficient. I went with the\n> first one in the attached patch.\n\nYou should not remove the CROSS JOIN mention at l. 604, first because\nthe references to it just below would become odd, and second because\nthen it's not explained anywhere on the page. 
Perhaps you could\nput back a definition of CROSS JOIN just below the entry for NATURAL,\nbut you'll still have to do something with the references at l. 614,\n628, 632.\n\nAlso, doesn't \"[ AS join_using_alias ]\" apply to NATURAL and CROSS\njoins? You've left that out of the syntax summary.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 13 Aug 2022 22:21:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SELECT documentation" }, { "msg_contents": "On Sat, Aug 13, 2022 at 10:21:26PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > Hi, I agree we should show the more modern JOIN sytax. However, this is\n> > just an example, so one example should be sufficient. I went with the\n> > first one in the attached patch.\n> \n> You should not remove the CROSS JOIN mention at l. 604, first because\n> the references to it just below would become odd, and second because\n> then it's not explained anywhere on the page. Perhaps you could\n> put back a definition of CROSS JOIN just below the entry for NATURAL,\n> but you'll still have to do something with the references at l. 614,\n> 628, 632.\n\nGood point. I restrutured the docs to move CROSS JOIN to a separate\nsection like NATURAL and adjusted the text, patch attached.\n\n> Also, doesn't \"[ AS join_using_alias ]\" apply to NATURAL and CROSS\n> joins? 
You've left that out of the syntax summary.\n\nUh, I only see it for USING in gram.y:\n\n\t/* JOIN qualification clauses\n\t * Possibilities are:\n\t * USING ( column list ) [ AS alias ]\n\t * allows only unqualified column names,\n\t * which must match between tables.\n\t * ON expr allows more general qualifications.\n\t *\n\t * We return USING as a two-element List (the first item being a sub-List\n\t * of the common column names, and the second either an Alias item or NULL).\n\t * An ON-expr will not be a List, so it can be told apart that way.\n\t */\n\t\n\tjoin_qual: USING '(' name_list ')' opt_alias_clause_for_join_using\n\t {\n\t $$ = (Node *) list_make2($3, $5);\n\t }\n\t | ON a_expr\n\t {\n\t $$ = $2;\n\t }\n\t ;\n\n\t...\n\n\t/*\n\t * The alias clause after JOIN ... USING only accepts the AS ColId spelling,\n\t * per SQL standard. (The grammar could parse the other variants, but they\n\t * don't seem to be useful, and it might lead to parser problems in the\n\t * future.)\n\t */\n\topt_alias_clause_for_join_using:\n\t AS ColId\n\t {\n\t $$ = makeNode(Alias);\n\t $$->aliasname = $2;\n\t /* the column name list will be inserted later */\n\t }\n\t | /*EMPTY*/ { $$ = NULL; }\n\t ;\n\nwhich is only used in:\n\n\t| table_ref join_type JOIN table_ref join_qual\n\t| table_ref JOIN table_ref join_qual\n\nI have updated my private build:\n\n\thttps://momjian.us/tmp/pgsql/sql-select.html\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson", "msg_date": "Mon, 15 Aug 2022 22:53:18 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: SELECT documentation" }, { "msg_contents": " On Mon, Aug 15, 2022 at 10:53:18PM -0400, Bruce Momjian wrote:\n> On Sat, Aug 13, 2022 at 10:21:26PM -0400, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > Hi, I agree we should show the more modern JOIN sytax. 
However, this is\n> > > just an example, so one example should be sufficient. I went with the\n> > > first one in the attached patch.\n> > \n> > You should not remove the CROSS JOIN mention at l. 604, first because\n> > the references to it just below would become odd, and second because\n> > then it's not explained anywhere on the page. Perhaps you could\n> > put back a definition of CROSS JOIN just below the entry for NATURAL,\n> > but you'll still have to do something with the references at l. 614,\n> > 628, 632.\n> \n> Good point. I restrutured the docs to move CROSS JOIN to a separate\n> section like NATURAL and adjusted the text, patch attached.\n\nPatch applied back to PG 11. PG 10 was different enough and old enough\nthat I skipped it. This is a big improvement. Thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Wed, 31 Aug 2022 21:47:33 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: SELECT documentation" } ]
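The four join spellings compared in this thread (SQL-89 comma join, JOIN ... ON, JOIN ... USING, NATURAL JOIN) are equivalent for a simple equi-join on the one common column. A rough, hedged sketch of that equivalence, run against SQLite only because its join syntax matches these forms and it needs no server; the table contents are invented:

```python
import sqlite3

# Build tiny distributors/films tables mirroring the docs example, then
# run the same equi-join four ways and compare the result sets.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE distributors (did INTEGER, name TEXT);
    CREATE TABLE films (title TEXT, did INTEGER, date_prod TEXT, kind TEXT);
    INSERT INTO distributors VALUES (101, 'Acme'), (102, 'Globex');
    INSERT INTO films VALUES
        ('Alpha', 101, '2001-01-01', 'Drama'),
        ('Beta',  102, '2002-02-02', 'Comedy');
""")

queries = [
    # SQL-89 comma join, as in the existing documentation example
    "SELECT f.title, d.name FROM distributors d, films f WHERE f.did = d.did",
    # explicit join with ON
    "SELECT f.title, d.name FROM distributors d JOIN films f ON f.did = d.did",
    # explicit join with USING
    "SELECT f.title, d.name FROM distributors d JOIN films f USING (did)",
    # NATURAL JOIN: joins on the only common column, did
    "SELECT f.title, d.name FROM distributors d NATURAL JOIN films f",
]
results = [sorted(conn.execute(q).fetchall()) for q in queries]
assert all(r == results[0] for r in results)
print(results[0])
```

Note the equivalence of NATURAL JOIN holds only because `did` is the sole column name the two tables share; that caveat is exactly why the thread argues for listing the three join forms separately in the synopsis.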
[ { "msg_contents": "Hi,\n\n bool isunique;\n+ bool nulls_not_distinct;\n } BTSpool;\n\nLooking at the other fields in BTSpool, there is no underscore in field\nname.\nI think the new field can be named nullsdistinct. This way, the\ndouble negative is avoided.\n\nSimilar comment for new fields in BTShared and BTLeader.\n\nAnd the naming would be consistent with information_schema.sql where\nnulls_distinct is used:\n\n+ CAST('YES' AS yes_or_no) AS enforced,\n+ CAST(NULL AS yes_or_no) AS nulls_distinct\n", "msg_date": "Wed, 29 Dec 2021 15:27:57 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": true, "msg_subject": "Re: UNIQUE null treatment option" } ]
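The feature under review here changes the default "null values are distinct" rule for UNIQUE constraints. That default can be demonstrated with SQLite's standard UNIQUE behavior, which matches PostgreSQL's default; this is only an illustrative sketch of the default semantics, since the new `UNIQUE NULLS NOT DISTINCT` syntax itself is PostgreSQL-specific and not available in SQLite.

```python
import sqlite3

# Default semantics: two NULLs never conflict under a UNIQUE constraint,
# while two equal non-NULL values do.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER UNIQUE)")

conn.execute("INSERT INTO t VALUES (NULL)")
conn.execute("INSERT INTO t VALUES (NULL)")    # allowed: NULLs are distinct

duplicate_rejected = False
try:
    conn.execute("INSERT INTO t VALUES (1)")
    conn.execute("INSERT INTO t VALUES (1)")   # real duplicate: rejected
except sqlite3.IntegrityError:
    duplicate_rejected = True

nulls = conn.execute("SELECT count(*) FROM t WHERE x IS NULL").fetchone()[0]
print(nulls, duplicate_rejected)
```

With the proposed NULLS NOT DISTINCT option, the second NULL insert above would be rejected too, which is what the new flag threaded through BTSpool/BTShared/BTLeader controls.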
[ { "msg_contents": "Hi,\n\nWith unlucky scheduling you can get a failure like this:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hoverfly&dt=2021-12-22%2010%3A51%3A32\n\nSuggested fix attached.", "msg_date": "Thu, 30 Dec 2021 15:12:04 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Tests \"with\" and \"alter_table\" suffer from name clash" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> With unlucky scheduling you can get a failure like this:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hoverfly&dt=2021-12-22%2010%3A51%3A32\n> Suggested fix attached.\n\nLooks reasonable. We really should avoid using such common\nnames for short-lived tables in any case --- it's an invitation\nto trouble. So I'd vote for changing the other use of \"test\", too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 Dec 2021 21:27:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Tests \"with\" and \"alter_table\" suffer from name clash" }, { "msg_contents": "On Thu, Dec 30, 2021 at 3:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Looks reasonable. We really should avoid using such common\n> names for short-lived tables in any case --- it's an invitation\n> to trouble. So I'd vote for changing the other use of \"test\", too.\n\nIn fact only REL_10_STABLE had the problem, because commit 2cf8c7aa\nalready fixed the other instance in later branches. I'd entirely\nforgotten that earlier discussion, which apparently didn't quite go\nfar enough. So I only needed to push the with.sql change. 
Done.\n\n\n", "msg_date": "Thu, 30 Dec 2021 17:25:40 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Tests \"with\" and \"alter_table\" suffer from name clash" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> In fact only REL_10_STABLE had the problem, because commit 2cf8c7aa\n> already fixed the other instance in later branches. I'd entirely\n> forgotten that earlier discussion, which apparently didn't quite go\n> far enough. So I only needed to push the with.sql change. Done.\n\nHah, I thought this felt familiar. So the real problem is that\nmy backpatch (b15a8c963) only fixed half of the hazard. Sigh.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 Dec 2021 23:50:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Tests \"with\" and \"alter_table\" suffer from name clash" } ]
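The hazard fixed in this thread, two concurrently scheduled regression scripts both creating a table named "test" in the same database, can be pictured with a minimal sketch; SQLite stands in for the shared namespace, and the table names are invented:

```python
import sqlite3

# Two "test scripts" sharing one database: a common short-lived table
# name collides; per-script names do not.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE test (a INTEGER)")       # script 1
try:
    db.execute("CREATE TABLE test (a INTEGER)")   # script 2: name clash
    clashed = False
except sqlite3.OperationalError:
    clashed = True

db.execute("CREATE TABLE with_test (a INTEGER)")        # distinct names,
db.execute("CREATE TABLE alter_table_test (a INTEGER)") # no collision
print(clashed)
```

In the real suite the failure mode was scheduling-dependent rather than deterministic, which is why it only showed up under unlucky buildfarm timing.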
[ { "msg_contents": "Hello,\n\nI've been reading the autovacuum code (the launcher and the worker) on the\n14 branch. As previously, I've seen some configuration at the beginning,\nespecially for statement_timeout, lock_timeout and\nidle_in_transaction_session_timeout, and I was surprised to discover there\nwas no configuration for idle_session_timeout. I'm not sure the code should\nset it to 0 as well (otherwise I'd have written a patch), but, if there was\na decision made to ignore its value, I'd be interested to know the reason.\nI could guess for the autovacuum worker (it seems to work in a transaction,\nso it's already handled by the idle_in_transaction_timeout), but I have no\nidea for the autovacuum launcher.\n\nIf it was just missed, I could write a patch this week to fix this.\n\nRegards.\n\n\n-- \nGuillaume.\n", "msg_date": "Thu, 30 Dec 2021 10:18:37 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Autovacuum and idle_session_timeout" }, { "msg_contents": "\nOn Thu, 30 Dec 2021 at 17:18, Guillaume Lelarge <guillaume@lelarge.info> wrote:\n> Hello,\n>\n> I've been reading the autovacuum code (the launcher and the worker) on the\n> 14 branch. As previously, I've seen some configuration at the beginning,\n> especially for statement_timeout, lock_timeout and\n> idle_in_transaction_session_timeout, and I was surprised to discover there\n> was no configuration for idle_session_timeout. I'm not sure the code should\n> set it to 0 as well (otherwise I'd have written a patch), but, if there was\n> a decision made to ignore its value, I'd be interested to know the reason.\n> I could guess for the autovacuum worker (it seems to work in a transaction,\n> so it's already handled by the idle_in_transaction_timeout), but I have no\n> idea for the autovacuum launcher.\n>\n> If it was just missed, I could write a patch this week to fix this.\n>\n\nOh, it was just missed. I didn't note set autovacuum code set those settings,\nI think we should also set idle_session_timeout to 0.\n\nShould we also change this for pg_dump and pg_backup_archiver?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Thu, 30 Dec 2021 18:44:23 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Autovacuum and idle_session_timeout" }, { "msg_contents": "Le jeu. 30 déc. 2021 à 11:44, Japin Li <japinli@hotmail.com> a écrit :\n\n>\n> On Thu, 30 Dec 2021 at 17:18, Guillaume Lelarge <guillaume@lelarge.info>\n> wrote:\n> > Hello,\n> >\n> > I've been reading the autovacuum code (the launcher and the worker) on\n> the\n> > 14 branch. As previously, I've seen some configuration at the beginning,\n> > especially for statement_timeout, lock_timeout and\n> > idle_in_transaction_session_timeout, and I was surprised to discover\n> there\n> > was no configuration for idle_session_timeout. I'm not sure the code\n> should\n> > set it to 0 as well (otherwise I'd have written a patch), but, if there\n> was\n> > a decision made to ignore its value, I'd be interested to know the\n> reason.\n> > I could guess for the autovacuum worker (it seems to work in a\n> transaction,\n> > so it's already handled by the idle_in_transaction_timeout), but I have\n> no\n> > idea for the autovacuum launcher.\n> >\n> > If it was just missed, I could write a patch this week to fix this.\n> >\n>\n> Oh, it was just missed. I didn't note set autovacuum code set those\n> settings,\n> I think we should also set idle_session_timeout to 0.\n>\n> Should we also change this for pg_dump and pg_backup_archiver?\n>\n>\npg_dump works in a single transaction, so it's already dealt with\nidle_in_transaction_timeout. Though I guess setting both would work too.\n\n\n-- \nGuillaume.\n", "msg_date": "Thu, 30 Dec 2021 11:53:37 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Autovacuum and idle_session_timeout" }, { "msg_contents": "On Thu, 30 Dec 2021 at 18:53, Guillaume Lelarge <guillaume@lelarge.info> wrote:\n> Le jeu. 30 déc. 2021 à 11:44, Japin Li <japinli@hotmail.com> a écrit :\n>\n>>\n> pg_dump works in a single transaction, so it's already dealt with\n> idle_in_transaction_timeout. Though I guess setting both would work too.\n\nAttached fix this, please consider reveiew it. Thanks.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Thu, 30 Dec 2021 19:01:22 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Autovacuum and idle_session_timeout" }, { "msg_contents": "Le jeu. 30 déc. 2021 à 12:01, Japin Li <japinli@hotmail.com> a écrit :\n\n>\n> On Thu, 30 Dec 2021 at 18:53, Guillaume Lelarge <guillaume@lelarge.info>\n> wrote:\n> > Le jeu. 30 déc. 2021 à 11:44, Japin Li <japinli@hotmail.com> a écrit :\n> >\n> >>\n> > pg_dump works in a single transaction, so it's already dealt with\n> > idle_in_transaction_timeout. Though I guess setting both would work too.\n>\n> Attached fix this, please consider reveiew it. Thanks.\n>\n>\nI've read it and it really looks like what I would have done. Sounds good\nto me.\n\n\n-- \nGuillaume.\n", "msg_date": "Thu, 30 Dec 2021 14:18:42 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Autovacuum and idle_session_timeout" }, { "msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> On Thu, 30 Dec 2021 at 18:53, Guillaume Lelarge <guillaume@lelarge.info> wrote:\n>> pg_dump works in a single transaction, so it's already dealt with\n>> idle_in_transaction_timeout. Though I guess setting both would work too.\n\n> Attached fix this, please consider reveiew it. Thanks.\n\nThis seems rather pointless to me. The idle-session timeout is only\nactivated in PostgresMain's input loop, so it will never be reached\nin autovacuum or other background workers. (The same is true for\nidle_in_transaction_session_timeout, so the fact that somebody made\nautovacuum.c clear that looks like cargo-cult programming from here,\nnot useful code.) And as for pg_dump, how would it ever trigger the\ntimeout? It's not going to sit there thinking, especially not\noutside a transaction.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 30 Dec 2021 11:24:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Autovacuum and idle_session_timeout" }, { "msg_contents": "Le jeu. 30 déc. 2021 à 17:25, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n\n> Japin Li <japinli@hotmail.com> writes:\n> > On Thu, 30 Dec 2021 at 18:53, Guillaume Lelarge <guillaume@lelarge.info>\n> wrote:\n> >> pg_dump works in a single transaction, so it's already dealt with\n> >> idle_in_transaction_timeout. Though I guess setting both would work too.\n>\n> > Attached fix this, please consider reveiew it. Thanks.\n>\n> This seems rather pointless to me. The idle-session timeout is only\n> activated in PostgresMain's input loop, so it will never be reached\n> in autovacuum or other background workers. (The same is true for\n> idle_in_transaction_session_timeout, so the fact that somebody made\n> autovacuum.c clear that looks like cargo-cult programming from here,\n> not useful code.) And as for pg_dump, how would it ever trigger the\n> timeout? It's not going to sit there thinking, especially not\n> outside a transaction.\n>\n>\nAgreed. It makes more sense. So no need for the patch. Thanks to both.\n\n\n-- \nGuillaume.\n", "msg_date": "Thu, 30 Dec 2021 18:01:59 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Autovacuum and idle_session_timeout" }, { "msg_contents": "\nOn Fri, 31 Dec 2021 at 00:24, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Japin Li <japinli@hotmail.com> writes:\n>> On Thu, 30 Dec 2021 at 18:53, Guillaume Lelarge <guillaume@lelarge.info> wrote:\n>>> pg_dump works in a single transaction, so it's already dealt with\n>>> idle_in_transaction_timeout. Though I guess setting both would work too.\n>\n>> Attached fix this, please consider reveiew it. Thanks.\n>\n> This seems rather pointless to me. The idle-session timeout is only\n> activated in PostgresMain's input loop, so it will never be reached\n> in autovacuum or other background workers. (The same is true for\n> idle_in_transaction_session_timeout, so the fact that somebody made\n> autovacuum.c clear that looks like cargo-cult programming from here,\n> not useful code.) And as for pg_dump, how would it ever trigger the\n> timeout? It's not going to sit there thinking, especially not\n> outside a transaction.\n>\n\nThanks for your clarify! If the timeout never be reached, should we remove\nthose settings?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Fri, 31 Dec 2021 10:01:47 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Autovacuum and idle_session_timeout" } ]
[ { "msg_contents": "Good day, hackers.\n\nProblem:\n- Append path is created with explicitely parallel_aware = true\n- It has two child, one is trivial, other is parallel_aware = false .\n Trivial child is dropped.\n- Gather/GatherMerge path takes Append path as a child and thinks\n its child is parallel_aware = true.\n- But Append path is removed at the last since it has only one child.\n- Now Gather/GatherMerge thinks its child is parallel_aware, but it\n is not.\n Gather/GatherMerge runs its child twice: in a worker and in a leader,\n and gathers same rows twice.\n\nReproduction code attached (repro.sql. Included as a test as well).\n\nSuggested quick (and valid) fix in the patch attached:\n- If Append has single child, then copy its parallel awareness.\n\nBug were introduced with commit 8edd0e79460b414b1d971895312e549e95e12e4f\n\"Suppress Append and MergeAppend plan nodes that have a single child.\"\n\nDuring discussion, it were supposed [1] those fields should be copied:\n\n> I haven't looked into whether this does the right things for parallel\n> planning --- possibly create_[merge]append_path need to propagate up\n> parallel-related path fields from the single child?\n\nBut it were not so obvious [2].\n\nBetter fix could contain removing Gather/GatherMerge node as well if\nits child is not parallel aware.\n\nBug is reported in https://postgr.es/m/flat/17335-4dc92e1aea3a78af%40postgresql.org\nSince no way to add thread from pgsql-bugs to commitfest, I write here.\n\n[1] https://postgr.es/m/17500.1551669976%40sss.pgh.pa.us\n[2] https://postgr.es/m/CAKJS1f_Wt_tL3S32R3wpU86zQjuHfbnZbFt0eqm%3DqcRFcdbLvw%40mail.gmail.com\n\n---- \nregards\nYura Sokolov\ny.sokolov@postgrespro.ru\nfunny.falcon@gmail.com", "msg_date": "Thu, 30 Dec 2021 14:14:32 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Fix BUG #17335: Duplicate result rows in Gather node" }, { "msg_contents": "On Thu, Dec 30, 2021 at 4:44 PM Yura Sokolov 
<y.sokolov@postgrespro.ru>\nwrote:\n\n> Good day, hackers.\n>\n> Problem:\n> - Append path is created with explicitely parallel_aware = true\n> - It has two child, one is trivial, other is parallel_aware = false .\n> Trivial child is dropped.\n> - Gather/GatherMerge path takes Append path as a child and thinks\n> its child is parallel_aware = true.\n> - But Append path is removed at the last since it has only one child.\n> - Now Gather/GatherMerge thinks its child is parallel_aware, but it\n> is not.\n> Gather/GatherMerge runs its child twice: in a worker and in a leader,\n> and gathers same rows twice.\n>\n> Reproduction code attached (repro.sql. Included as a test as well).\n>\n\nYeah, this is a problem.\n\n\n>\n>\n> Suggested quick (and valid) fix in the patch attached:\n> - If Append has single child, then copy its parallel awareness.\n>\n> Bug were introduced with commit 8edd0e79460b414b1d971895312e549e95e12e4f\n> \"Suppress Append and MergeAppend plan nodes that have a single child.\"\n>\n> During discussion, it were supposed [1] those fields should be copied:\n>\n> > I haven't looked into whether this does the right things for parallel\n> > planning --- possibly create_[merge]append_path need to propagate up\n> > parallel-related path fields from the single child?\n>\n> But it were not so obvious [2].\n>\n> Better fix could contain removing Gather/GatherMerge node as well if\n> its child is not parallel aware.\n>\n\nThe Gather path will only be created if we have an underlying partial path,\nso I think if we are generating the append path only from the non-partial\npaths then we can see if the number of child nodes is just 1 then don't\ngenerate the partial append path, so from that you will node generate the\npartial join and eventually gather will be avoided.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n", "msg_date": "Thu, 30 Dec 2021 17:59:51 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix BUG #17335: Duplicate result rows in Gather node" }, { "msg_contents": "On Fri, 31 Dec 2021 at 00:14, Yura Sokolov <y.sokolov@postgrespro.ru> 
wrote:\n> Problem:\n> - Append path is created with explicitely parallel_aware = true\n> - It has two child, one is trivial, other is parallel_aware = false .\n> Trivial child is dropped.\n> - Gather/GatherMerge path takes Append path as a child and thinks\n> its child is parallel_aware = true.\n> - But Append path is removed at the last since it has only one child.\n> - Now Gather/GatherMerge thinks its child is parallel_aware, but it\n> is not.\n> Gather/GatherMerge runs its child twice: in a worker and in a leader,\n> and gathers same rows twice.\n\nThanks for the report. I can confirm that I can recreate the problem\nwith your script.\n\nI will look into this further later next week.\n\nDavid\n\n\n", "msg_date": "Sat, 1 Jan 2022 15:19:18 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix BUG #17335: Duplicate result rows in Gather node" }, { "msg_contents": "В Сб, 01/01/2022 в 15:19 +1300, David Rowley пишет:\n> On Fri, 31 Dec 2021 at 00:14, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> > Problem:\n> > - Append path is created with explicitely parallel_aware = true\n> > - It has two child, one is trivial, other is parallel_aware = false .\n> > Trivial child is dropped.\n> > - Gather/GatherMerge path takes Append path as a child and thinks\n> > its child is parallel_aware = true.\n> > - But Append path is removed at the last since it has only one child.\n> > - Now Gather/GatherMerge thinks its child is parallel_aware, but it\n> > is not.\n> > Gather/GatherMerge runs its child twice: in a worker and in a leader,\n> > and gathers same rows twice.\n> \n> Thanks for the report. 
I can confirm that I can recreate the problem\n> with your script.\n> \n> I will look into this further later next week.\n> \n\nGood day, David.\n\nExcuse me for disturbing.\nAny update on this?\nAny chance to be fixed in next minor release?\nCould this simple fix be merged before further improvements?\n\nYura.\n\n\n\n", "msg_date": "Mon, 17 Jan 2022 10:49:05 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fix BUG #17335: Duplicate result rows in Gather node" }, { "msg_contents": "On Fri, 31 Dec 2021 at 00:14, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> Suggested quick (and valid) fix in the patch attached:\n> - If Append has single child, then copy its parallel awareness.\n\nI've been looking at this and I've gone through changing my mind about\nwhat's the right fix quite a number of times.\n\nMy current thoughts are that I don't really like the fact that we can\nhave plans in the following shape:\n\n Finalize Aggregate\n -> Gather\n Workers Planned: 1\n -> Partial Aggregate\n -> Parallel Hash Left Join\n Hash Cond: (gather_append_1.fk = gather_append_2.fk)\n -> Index Scan using gather_append_1_ix on gather_append_1\n Index Cond: (f = true)\n -> Parallel Hash\n -> Parallel Seq Scan on gather_append_2\n\nIt's only made safe by the fact that Gather will only use 1 worker.\nTo me, it just seems too fragile to assume that's always going to be\nthe case. I feel like this fix just relies on the fact that\ncreate_gather_path() and create_gather_merge_path() do\n\"pathnode->num_workers = subpath->parallel_workers;\". If someone\ndecided that was to work a different way, then we risk this breaking\nagain. 
Additionally, today we have Gather and GatherMerge, but we may\none day end up with more node types that gather results from parallel\nworkers, or even a completely different way of executing plans.\n\nI think a safer way to fix this is to just not remove the\nAppend/MergeAppend node if the parallel_aware flag of the only-child\nand the Append/MergeAppend don't match. I've done that in the\nattached.\n\nI believe the code at the end of add_paths_to_append_rel() can remain as is.\n\nDavid", "msg_date": "Thu, 20 Jan 2022 09:32:17 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix BUG #17335: Duplicate result rows in Gather node" }, { "msg_contents": "В Чт, 20/01/2022 в 09:32 +1300, David Rowley пишет:\n> On Fri, 31 Dec 2021 at 00:14, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> > Suggested quick (and valid) fix in the patch attached:\n> > - If Append has single child, then copy its parallel awareness.\n> \n> I've been looking at this and I've gone through changing my mind about\n> what's the right fix quite a number of times.\n> \n> My current thoughts are that I don't really like the fact that we can\n> have plans in the following shape:\n> \n> Finalize Aggregate\n> -> Gather\n> Workers Planned: 1\n> -> Partial Aggregate\n> -> Parallel Hash Left Join\n> Hash Cond: (gather_append_1.fk = gather_append_2.fk)\n> -> Index Scan using gather_append_1_ix on gather_append_1\n> Index Cond: (f = true)\n> -> Parallel Hash\n> -> Parallel Seq Scan on gather_append_2\n> \n> It's only made safe by the fact that Gather will only use 1 worker.\n> To me, it just seems too fragile to assume that's always going to be\n> the case. I feel like this fix just relies on the fact that\n> create_gather_path() and create_gather_merge_path() do\n> \"pathnode->num_workers = subpath->parallel_workers;\". If someone\n> decided that was to work a different way, then we risk this breaking\n> again. 
Additionally, today we have Gather and GatherMerge, but we may\n> one day end up with more node types that gather results from parallel\n> workers, or even a completely different way of executing plans.\n\nIt seems strange that the parallel_aware and parallel_safe flags neither affect\nexecution nor are properly checked.\n\nExcept parallel_safe is checked in ExecSerializePlan, which is called from\nExecInitParallelPlan, which is called from ExecGather and ExecGatherMerge.\nBut it looks like this check doesn't affect execution either.\n\n> \n> I think a safer way to fix this is to just not remove the\n> Append/MergeAppend node if the parallel_aware flag of the only-child\n> and the Append/MergeAppend don't match. I've done that in the\n> attached.\n> \n> I believe the code at the end of add_paths_to_append_rel() can remain as is.\n\nI found clean_up_removed_plan_level is also called from set_subqueryscan_references.\nIs there a need to patch there as well?\n\nAnd there is a strange state:\n- in the loop by subpaths, pathnode->node.parallel_safe is set to AND of\n  all its subpath's parallel_safe\n  (therefore there was a need to copy it in my patch version),\n- that means our AppendPath is parallel_aware but not parallel_safe.\nIt is a bit ridiculous.\n\nAnd it is strange that an AppendPath could have more parallel_workers than the sum of\nits children's parallel_workers.\n\nSo it looks like the whole machinery around parallel_aware/parallel_safe is not\nconsistent enough.\n\nEither way, I attach your version of the fix with my tests as a new patch version.\n\nregards,\nYura Sokolov", "msg_date": "Sun, 23 Jan 2022 14:56:46 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fix BUG #17335: Duplicate result rows in Gather node" }, { "msg_contents": "В Пн, 24/01/2022 в 16:24 +0300, Yura Sokolov пишет:\n> В Вс, 23/01/2022 в 14:56 +0300, Yura Sokolov пишет:\n> > В Чт, 20/01/2022 в 09:32 +1300, David Rowley пишет:\n> > > On Fri, 31 Dec 2021 at 00:14, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> > > > Suggested quick 
(and valid) fix in the patch attached:\n> > > - If Append has single child, then copy its parallel awareness.\n> > \n> > I've been looking at this and I've gone through changing my mind about\n> > what's the right fix quite a number of times.\n> > \n> > My current thoughts are that I don't really like the fact that we can\n> > have plans in the following shape:\n> > \n> > Finalize Aggregate\n> > -> Gather\n> > Workers Planned: 1\n> > -> Partial Aggregate\n> > -> Parallel Hash Left Join\n> > Hash Cond: (gather_append_1.fk = gather_append_2.fk)\n> > -> Index Scan using gather_append_1_ix on gather_append_1\n> > Index Cond: (f = true)\n> > -> Parallel Hash\n> > -> Parallel Seq Scan on gather_append_2\n> > \n> > It's only made safe by the fact that Gather will only use 1 worker.\n> > To me, it just seems too fragile to assume that's always going to be\n> > the case. I feel like this fix just relies on the fact that\n> > create_gather_path() and create_gather_merge_path() do\n> > \"pathnode->num_workers = subpath->parallel_workers;\". If someone\n> > decided that was to work a different way, then we risk this breaking\n> > again. Additionally, today we have Gather and GatherMerge, but we may\n> > one day end up with more node types that gather results from parallel\n> > workers, or even a completely different way of executing plans.\n> \n> It seems strange parallel_aware and parallel_safe flags neither affect\n> execution nor are properly checked.\n> \n> Except parallel_safe is checked in ExecSerializePlan which is called from\n> ExecInitParallelPlan, which is called from ExecGather and ExecGatherMerge.\n> But looks like this check doesn't affect execution as well.\n> \n> > I think a safer way to fix this is to just not remove the\n> > Append/MergeAppend node if the parallel_aware flag of the only-child\n> > and the Append/MergeAppend don't match. 
I've done that in the\n> > attached.\n> > \n> > I believe the code at the end of add_paths_to_append_rel() can remain as is.\n> \n> I found clean_up_removed_plan_level also called from set_subqueryscan_references.\n> Is there a need to patch there as well?\n> \n> And there is strange state:\n> - in the loop by subpaths, pathnode->node.parallel_safe is set to AND of\n> all its subpath's parallel_safe\n> (therefore there were need to copy it in my patch version),\n> - that means, our AppendPath is parallel_aware but not parallel_safe.\n> It is ridiculous a bit.\n> \n> And it is strange AppendPath could have more parallel_workers than sum of\n> its children parallel_workers.\n> \n> So it looks like whole machinery around parallel_aware/parallel_safe has\n> no enough consistency.\n> \n> Either way, I attach you version of fix with my tests as new patch version.\n\nLooks like volatile \"Memory Usage:\" in EXPLAIN brokes 'make check'\nsporadically.\n\nApplied replacement in style of memoize.sql test.\n\nWhy there is no way to disable \"Buckets: %d Buffers: %d Memory Usage: %dkB\"\noutput in show_hash_info?\n\nregards,\nYura Sokolov", "msg_date": "Mon, 24 Jan 2022 16:24:29 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fix BUG #17335: Duplicate result rows in Gather node" }, { "msg_contents": "В Пн, 24/01/2022 в 16:24 +0300, Yura Sokolov пишет:\n> В Вс, 23/01/2022 в 14:56 +0300, Yura Sokolov пишет:\n> > В Чт, 20/01/2022 в 09:32 +1300, David Rowley пишет:\n> > > On Fri, 31 Dec 2021 at 00:14, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> > > > Suggested quick (and valid) fix in the patch attached:\n> > > > - If Append has single child, then copy its parallel awareness.\n> > > \n> > > I've been looking at this and I've gone through changing my mind about\n> > > what's the right fix quite a number of times.\n> > > \n> > > My current thoughts are that I don't really like the fact that we can\n> > > have plans in the 
following shape:\n> > > \n> > > Finalize Aggregate\n> > > -> Gather\n> > > Workers Planned: 1\n> > > -> Partial Aggregate\n> > > -> Parallel Hash Left Join\n> > > Hash Cond: (gather_append_1.fk = gather_append_2.fk)\n> > > -> Index Scan using gather_append_1_ix on gather_append_1\n> > > Index Cond: (f = true)\n> > > -> Parallel Hash\n> > > -> Parallel Seq Scan on gather_append_2\n> > > \n> > > It's only made safe by the fact that Gather will only use 1 worker.\n> > > To me, it just seems too fragile to assume that's always going to be\n> > > the case. I feel like this fix just relies on the fact that\n> > > create_gather_path() and create_gather_merge_path() do\n> > > \"pathnode->num_workers = subpath->parallel_workers;\". If someone\n> > > decided that was to work a different way, then we risk this breaking\n> > > again. Additionally, today we have Gather and GatherMerge, but we may\n> > > one day end up with more node types that gather results from parallel\n> > > workers, or even a completely different way of executing plans.\n> > \n> > It seems strange parallel_aware and parallel_safe flags neither affect\n> > execution nor are properly checked.\n> > \n> > Except parallel_safe is checked in ExecSerializePlan which is called from\n> > ExecInitParallelPlan, which is called from ExecGather and ExecGatherMerge.\n> > But looks like this check doesn't affect execution as well.\n> > \n> > > I think a safer way to fix this is to just not remove the\n> > > Append/MergeAppend node if the parallel_aware flag of the only-child\n> > > and the Append/MergeAppend don't match. 
I've done that in the\n> > > attached.\n> > > \n> > > I believe the code at the end of add_paths_to_append_rel() can remain as is.\n> > \n> > I found clean_up_removed_plan_level also called from set_subqueryscan_references.\n> > Is there a need to patch there as well?\n> > \n> > And there is strange state:\n> > - in the loop by subpaths, pathnode->node.parallel_safe is set to AND of\n> > all its subpath's parallel_safe\n> > (therefore there were need to copy it in my patch version),\n> > - that means, our AppendPath is parallel_aware but not parallel_safe.\n> > It is ridiculous a bit.\n> > \n> > And it is strange AppendPath could have more parallel_workers than sum of\n> > its children parallel_workers.\n> > \n> > So it looks like whole machinery around parallel_aware/parallel_safe has\n> > no enough consistency.\n> > \n> > Either way, I attach you version of fix with my tests as new patch version.\n> \n> Looks like volatile \"Memory Usage:\" in EXPLAIN brokes 'make check'\n> sporadically.\n> \n> Applied replacement in style of memoize.sql test.\n> \n> Why there is no way to disable \"Buckets: %d Buffers: %d Memory Usage: %dkB\"\n> output in show_hash_info?\n\nAnd another attempt to fix tests volatility.", "msg_date": "Tue, 25 Jan 2022 07:35:45 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fix BUG #17335: Duplicate result rows in Gather node" }, { "msg_contents": "On Tue, 25 Jan 2022 at 17:35, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> And another attempt to fix tests volatility.\n\nFWIW, I had not really seen the point in adding a test for this. I\ndid however see a point in it with your original patch. 
It seemed\nuseful there to verify that Gather and GatherMerge did what we\nexpected with 1 worker.\n\nDavid\n\n\n", "msg_date": "Tue, 25 Jan 2022 20:03:07 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix BUG #17335: Duplicate result rows in Gather node" }, { "msg_contents": "On Tue, 25 Jan 2022 at 20:03, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 25 Jan 2022 at 17:35, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> > And another attempt to fix tests volatility.\n>\n> FWIW, I had not really seen the point in adding a test for this. I\n> did however see a point in it with your original patch. It seemed\n> useful there to verify that Gather and GatherMerge did what we\n> expected with 1 worker.\n\nI ended up pushing just the last patch I sent.\n\nThe reason I didn't think it was worth adding a new test was that no\ntests were added in the original commit. Existing tests did cover it,\nbut here we're just restoring the original behaviour for one simple\ncase. The test in your patch just seemed a bit more hassle than it\nwas worth. I struggle to imagine how we'll break this again.\n\nDavid\n\n\n", "msg_date": "Tue, 25 Jan 2022 21:20:25 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix BUG #17335: Duplicate result rows in Gather node" }, { "msg_contents": "В Вт, 25/01/2022 в 21:20 +1300, David Rowley пишет:\n> On Tue, 25 Jan 2022 at 20:03, David Rowley <dgrowleyml@gmail.com> wrote:\n> > On Tue, 25 Jan 2022 at 17:35, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> > > And another attempt to fix tests volatility.\n> > \n> > FWIW, I had not really seen the point in adding a test for this. I\n> > did however see a point in it with your original patch. 
It seemed\n> > useful there to verify that Gather and GatherMerge did what we\n> > expected with 1 worker.\n> \n> I ended up pushing just the last patch I sent.\n> \n> The reason I didn't think it was worth adding a new test was that no\n> tests were added in the original commit. Existing tests did cover it,\n\nExisted tests didn't catched the issue. It is pitty fix is merged\nwithout test case it fixes.\n\n> but here we're just restoring the original behaviour for one simple\n> case. The test in your patch just seemed a bit more hassle than it\n> was worth. I struggle to imagine how we'll break this again.\n\nThank you for attention and for fix.\n\nregards,\nYura Sokolov.\n\n\n\n", "msg_date": "Tue, 25 Jan 2022 12:29:07 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fix BUG #17335: Duplicate result rows in Gather node" }, { "msg_contents": "Yura Sokolov <y.sokolov@postgrespro.ru> writes:\n> В Вт, 25/01/2022 в 21:20 +1300, David Rowley пишет:\n>> The reason I didn't think it was worth adding a new test was that no\n>> tests were added in the original commit. Existing tests did cover it,\n\n> Existed tests didn't catched the issue. It is pitty fix is merged\n> without test case it fixes.\n\nI share David's skepticism about the value of a test case. The\nfailure mode that seems likely to me is some other code path making\nthe same mistake, which a predetermined test would not catch.\n\nTherefore, what I think could be useful is some very-late-stage\nassertion check (probably in createplan.c) verifying that the\nchild of a Gather is parallel-aware. 
Or maybe the condition\nneeds to be more general than that, but anyway the idea is for\nthe back end of the planner to verify that we didn't build a\nsilly plan.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jan 2022 11:32:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix BUG #17335: Duplicate result rows in Gather node" }, { "msg_contents": "On Wed, 26 Jan 2022 at 05:32, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Therefore, what I think could be useful is some very-late-stage\n> assertion check (probably in createplan.c) verifying that the\n> child of a Gather is parallel-aware. Or maybe the condition\n> needs to be more general than that, but anyway the idea is for\n> the back end of the planner to verify that we didn't build a\n> silly plan.\n\nYeah, it would be nice to have something like this. I think to do it,\nwe might need to invent some sort of path traversal function that can\ntake a custom callback function. The problem is that the parallel\naware path does not need to be directly below the gather/gathermerge.\n\nFor example (from select_distinct.out)\n\n Unique\n -> Sort\n Sort Key: four\n -> Gather\n Workers Planned: 2\n -> HashAggregate\n Group Key: four\n -> Parallel Seq Scan on tenk1\n\nFor this case, the custom callback would check that there's at least 1\nparallel_aware subpath below the Gather/GatherMerge.\n\nThere's probably some other rules that we could Assert are true. I\nthink any parallel_aware paths (unless they're scans) must contain\nonly parallel_aware subpaths. 
For example, parallel hash join must\nhave a parallel aware inner and outer.\n\nDavid\n\n\n", "msg_date": "Wed, 26 Jan 2022 10:30:59 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix BUG #17335: Duplicate result rows in Gather node" }, { "msg_contents": "On Wed, 26 Jan 2022 at 05:32, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Therefore, what I think could be useful is some very-late-stage\n> assertion check (probably in createplan.c) verifying that the\n> child of a Gather is parallel-aware. Or maybe the condition\n> needs to be more general than that, but anyway the idea is for\n> the back end of the planner to verify that we didn't build a\n> silly plan.\n\nI had a go at writing something along these lines, but I've ended up\nwith something I really don't like very much.\n\nI ended up having to write a recursive path traversal function. It's\ngeneric and it can be given a callback function to do whatever we like\nwith the Path. The problem is, that this seems like quite a bit of\ncode to maintain just for plan validation in Assert builds.\n\nCurrently, the patch validates 3 rules:\n\n1) Ensure a parallel_aware path has only parallel_aware or\nparallel_safe subpaths.\n2) Ensure Gather is either single_copy or contains at least one\nparallel_aware subnode.\n3) Ensure GatherMerge contains at least one parallel_aware subnode.\n\nI had to relax rule #1 a little as a Parallel Append can run subnodes\nthat are only parallel_safe and not parallel_aware. The problem with\nrelaxing this rule is that it does not catch the case that this bug\nreport was about. I could maybe tweak that so there's a special case\nfor Append to allow parallel aware or safe and ensure all other nodes\nhave only parallel_safe subnodes. 
I just don't really like that\nspecial case as it's likely to get broken/forgotten over time when we\nadd new nodes.\n\nI'm unsure if just being able to enforce rules #2 and #3 make this worthwhile.\n\nHappy to listen to other people's opinions and ideas on this. Without\nthose, I'm unlikely to try to push this any further.\n\nDavid", "msg_date": "Fri, 4 Feb 2022 13:07:42 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix BUG #17335: Duplicate result rows in Gather node" }, { "msg_contents": "On Thu, Feb 3, 2022 at 7:08 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> Currently, the patch validates 3 rules:\n>\n> 1) Ensure a parallel_aware path has only parallel_aware or\n> parallel_safe subpaths.\n\nI think that every path that is parallel_aware must also be\nparallel_safe. So checking for either parallel_aware or parallel_safe\nshould be equivalent to just checking parallel_safe, unless I am\nconfused.\n\nI think the actual rule is: every path under a Gather or GatherMerge\nmust be parallel-safe.\n\nI don't think there's any real rule about what has to be under\nparallel-aware paths -- except that it would have to be all\nparallel-safe stuff, because the whole thing is under a Gather\n(Merge). 
There may seem to be such a rule, but I suspect it's just an\naccident of whatever code we have now rather than anything intrinsic.\n\n> 2) Ensure Gather is either single_copy or contains at least one\n> parallel_aware subnode.\n\nI agree that this one is a rule which we could check.\n\n> 3) Ensure GatherMerge contains at least one parallel_aware subnode.\n\nThis one, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 3 Feb 2022 19:47:57 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix BUG #17335: Duplicate result rows in Gather node" }, { "msg_contents": "Thanks for having a look at this.\n\nOn Fri, 4 Feb 2022 at 13:48, Robert Haas <robertmhaas@gmail.com> wrote:\n> I think the actual rule is: every path under a Gather or GatherMerge\n> must be parallel-safe.\n\nI've adjusted the patch so that it counts parallel_aware and\nparallel_safe Paths independently and verifies everything below a\nGather[Merge] is parallel_safe.\n\nThe diff stat currently looks like:\n\nsrc/backend/optimizer/plan/createplan.c | 230\n1 file changed, 230 insertions(+)\n\nI still feel this is quite a bit of code for what we're getting here.\nI'd be more for it if the path traversal function existed for some\nother reason and I was just adding the callback functions and Asserts.\n\nI'm keen to hear what others think about that.\n\nDavid", "msg_date": "Wed, 9 Feb 2022 10:10:51 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix BUG #17335: Duplicate result rows in Gather node" }, { "msg_contents": "On Tue, Feb 8, 2022 at 1:11 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> Thanks for having a look at this.\n>\n> On Fri, 4 Feb 2022 at 13:48, Robert Haas <robertmhaas@gmail.com> wrote:\n> > I think the actual rule is: every path under a Gather or GatherMerge\n> > must be parallel-safe.\n>\n> I've adjusted the patch so that it counts parallel_aware and\n> 
parallel_safe Paths independently and verifies everything below a\n> Gather[Merge] is parallel_safe.\n>\n> The diff stat currently looks like:\n>\n> src/backend/optimizer/plan/createplan.c | 230\n> 1 file changed, 230 insertions(+)\n>\n> I still feel this is quite a bit of code for what we're getting here.\n> I'd be more for it if the path traversal function existed for some\n> other reason and I was just adding the callback functions and Asserts.\n>\n> I'm keen to hear what others think about that.\n>\n> David\n>\nHi,\n\n+ break;\n+ case T_MergeAppend:\n\nThe case for T_MergeAppend should be left indented.\n\n+ case T_Result:\n+ if (IsA(path, ProjectionPath))\n\nSince the remaining sub-cases don't have subpath, they are covered by the\nfinal `else` block - MinMaxAggPath and GroupResultPath don't need to be\nchecked.\n\nFor contains_a_parallel_aware_path(), it seems path_type_counter() can\nreturn bool indicating whether the walker should return early (when\nparallel aware count reaches 1).\n\nCheers", "msg_date": "Tue, 8 Feb 2022 13:45:42 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Fix BUG #17335: Duplicate result rows in Gather node" }, { "msg_contents": "On Tue, Feb 8, 2022 at 4:11 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I still feel this is quite a bit of code for what we're getting here.\n> I'd be more for it if the path traversal function existed for some\n> other reason and I was just adding the callback functions and Asserts.\n>\n> I'm keen to hear what others think about that.\n\nMy view is that functions like path_tree_walker are good things to\nhave on general principle. I find it likely that it will find other\nuses, and that if we don't add as part of this patch, someone will add\nit for some other reason in the future. So I would not really count\nthat in deciding how big this patch is, and the rest of what you have\nhere is pretty short and to the point.\n\nThere is the more difficult philosophical question of whether it's\nworth expending any code on this at all. I think it is pretty clear\nthat this has positive value: it could easily prevent >0 future bugs,\nwhich IMHO is not bad for such a small patch. However, it does feel a\nlittle bit primitive somehow, in the sense that there are a lot of\nthings you could do wrong which this wouldn't catch. For example, a\nGather with no parallel-aware node under it is probably busted, unless\nsomeone invents new kinds of parallel operators that work differently\nfrom what we have now. 
But a join beneath a Gather that is not itself\nparallel-aware should have a parallel-aware node under exactly one\nside of the join. If there's a parallel scan on both sides or neither\nside, even with stuff on top of it, that's wrong. But a parallel-aware\njoin could do something else, e.g. Parallel Hash Join expects a\nparallel path on both sides. Some other parallel-aware join type could\nexpect a parallel path on exactly one side without caring which one,\nor on one specific side, or maybe even on neither side.\n\nWhat we're really reasoning about here is whether the input is going\nto be partitioned across multiple executions of the plan in a proper\nway. A Gather is going to run the same plan in all of its workers, so\nit wants a subplan that when run in all workers will together produce\nall output rows. Parallel-aware scans partition the results across\nworkers, so they behave that way. A non-parallel aware join will work\nthat way if it joins a partition the input on one side to all of the\ninput from the other side, hence the rule I describe above. For\naggregates, you can't safely apply a plain old Aggregate operation\neither to a regular scan or to a parallel-aware scan and get the right\nanswer, which is why we need Partial and Finalize stages for parallel\nquery. But for a lot of other nodes, like Materialize, their output\nwill have the same properties as the input: if the subplan of a\nMaterialize node produces all the rows on each execution, the\nMaterialize node will too; if it produces a partition of the output\nrows each time it's executed, once per worker, the Materialize node\nwill do the same. And I think it's that kind of case that leads to the\ncheck we have here, that there ought to be a parallel-aware node in\nthere someplace.\n\nIt might be the case that there's some more sophisticated check we\ncould be doing here that would be more satisfying than the one you've\nwritten, but I'm not sure. 
Such a check might end up needing to know\nthe behavior of the existing nodes in a lot of detail, which then\nwouldn't help with finding bugs in new functionality we add in the\nfuture. In that sense, the kind of simple check you've got here has\nsomething to recommend it: it won't catch everything people can do\nwrong, but when it does trip, chances are good it's found a bug, and\nit's got a good chance of continuing to work as well as it does today\neven in the face of future additions. So I guess I'm mildly in favor\nof it, but I would also find it entirely reasonable if you were to\ndecide it's not quite worth it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 8 Feb 2022 16:54:08 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix BUG #17335: Duplicate result rows in Gather node" } ]
[ { "msg_contents": "Hi, hackers!\n\nLong time wraparound was a really big pain for highly loaded systems. One\nsource of performance degradation is the need to vacuum before every\nwraparound.\nAnd there were several proposals to make XIDs 64-bit like [1], [2], [3] and\n[4] to name a few.\n\nThe approach [2] seems to have stalled on CF since 2018. But meanwhile it\nwas successfully being used in our Postgres Pro fork all time since then.\nWe have hundreds of customers using 64-bit XIDs. Dozens of instances are\nunder load that require wraparound each 1-5 days with 32-bit XIDs.\nIt really helps the customers with a huge transaction load that in the case\nof 32-bit XIDs could experience wraparounds every day. So I'd like to\npropose this approach modification to CF.\n\nPFA updated working patch v6 for PG15 development cycle.\nIt is based on a patch by Alexander Korotkov version 5 [5] with a few\nfixes, refactoring and was rebased to PG15.\n\nMain changes:\n- Change TransactionId to 64bit\n- Disk tuple format (HeapTupleHeader) is unchanged: xmin and xmax\n remains 32bit\n-- 32bit xid is named ShortTransactionId now.\n-- Exception: see \"double xmax\" format below.\n- Heap page format is changed to contain xid and multixact base value,\n tuple's xmin and xmax are offsets from.\n-- xid_base and multi_base are stored as a page special data. 
PageHeader\n remains unmodified.\n-- If after upgrade page has no free space for special data, tuples are\n converted to \"double xmax\" format: xmin became virtual\n FrozenTransactionId, xmax occupies the whole 64bit.\n Page converted to new format when vacuum frees enough space.\n- In-memory tuples (HeapTuple) were enriched with copies of xid_base and\n multi_base from a page.\n\nToDo:\n- replace xid_base/multi_base in HeapTuple with precalculated 64bit\n xmin/xmax.\n- attempt to split the patch into \"in-memory\" part (add xmin/xmax to\n HeapTuple) and \"storage\" format change.\n- try to implement the storage part as a table access method.\n\nYour opinions are very much welcome!\n\n[1]\nhttps://www.postgresql.org/message-id/flat/1611355191319-0.post%40n3.nabble.com#c884ac33243ded0a47881137c6c96f6b\n[2]\nhttps://www.postgresql.org/message-id/flat/DA1E65A4-7C5A-461D-B211-2AD5F9A6F2FD%40gmail.com\n[3]\nhttps://www.postgresql.org/message-id/flat/CAPpHfduQ7KuCHvg3dHx%2B9Pwp_rNf705bjdRCrR_Cqiv_co4H9A%40mail.gmail.com\n[4]\nhttps://www.postgresql.org/message-id/flat/51957591572599112%40myt5-3a82a06244de.qloud-c.yandex.net\n[5]\nhttps://www.postgresql.org/message-id/CAPpHfdseWf0QLWMAhLgiyP4u%2B5WUondzdQ_Yd-eeF%3DDuj%3DVq0g%40mail.gmail.com", "msg_date": "Thu, 30 Dec 2021 15:15:16 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi Maxim,\r\n\r\n I’m glad to see that you’re trying to carry the 64-bit XID work forward. I had not noticed that my earlier patch (also derived from Alexander Kortkov’s patch) was responded to back in September. Perhaps we can merge some of the code cleanup that it contained, such as using XID_FMT everywhere and creating a type for the kind of page returned by TransactionIdToPage() to make the code cleaner.\r\n\r\n Is your patch functionally the same as the PostgresPro implementation? 
If so, I think it would be helpful for everyone’s understanding to read the PostgresPro documentation on VACUUM. See in particular section “Forced shrinking pg_clog and pg_multixact”\r\n\r\n https://postgrespro.com/docs/enterprise/9.6/routine-vacuuming#vacuum-for-wraparound\r\n\r\nbest regards,\r\n\r\n /Jim\r\n", "msg_date": "Tue, 4 Jan 2022 17:49:07 +0000", "msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Greetings,\n\n* Maxim Orlov (orlovmg@gmail.com) wrote:\n> Long time wraparound was a really big pain for highly loaded systems. One\n> source of performance degradation is the need to vacuum before every\n> wraparound.\n> And there were several proposals to make XIDs 64-bit like [1], [2], [3] and\n> [4] to name a few.\n> \n> The approach [2] seems to have stalled on CF since 2018. But meanwhile it\n> was successfully being used in our Postgres Pro fork all time since then.\n> We have hundreds of customers using 64-bit XIDs. 
Dozens of instances are\n> under load that require wraparound each 1-5 days with 32-bit XIDs.\n> It really helps the customers with a huge transaction load that in the case\n> of 32-bit XIDs could experience wraparounds every day. So I'd like to\n> propose this approach modification to CF.\n> \n> PFA updated working patch v6 for PG15 development cycle.\n> It is based on a patch by Alexander Korotkov version 5 [5] with a few\n> fixes, refactoring and was rebased to PG15.\n\nJust to confirm as I only did a quick look- if a transaction in such a\nhigh rate system lasts for more than a day (which certainly isn't\ncompletely out of the question, I've had week-long transactions\nbefore..), and something tries to delete a tuple which has tuples on it\nthat can't be frozen yet due to the long-running transaction- it's just\ngoing to fail?\n\nNot saying that I've got any idea how to fix that case offhand, and we\ndon't really support such a thing today as the server would just stop\ninstead, but if I saw something in the release notes talking about PG\nmoving to 64bit transaction IDs, I'd probably be pretty surprised to\ndiscover that there's still a 32bit limit that you have to watch out for\nor your system will just start failing transactions. Perhaps that's a\nworthwhile tradeoff for being able to generally avoid having to vacuum\nand deal with transaction wrap-around, but I have to wonder if there\nmight be a better answer. 
Of course, also wonder about how we're going\nto document and monitor for this potential issue and what kind of\ncorrective action will be needed (kill transactions older than a certain\namount of transactions..?).\n\nThanks,\n\nStephen", "msg_date": "Tue, 4 Jan 2022 14:32:20 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "\r\n\r\nOn 1/4/22, 2:35 PM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\r\n\r\n>>\r\n>> Not saying that I've got any idea how to fix that case offhand, and we\r\n>> don't really support such a thing today as the server would just stop\r\n>> instead, ...\r\n>> Perhaps that's a\r\n>> worthwhile tradeoff for being able to generally avoid having to vacuum\r\n>> and deal with transaction wrap-around, but I have to wonder if there\r\n>> might be a better answer. \r\n>>\r\n\r\nFor the target use cases that PostgreSQL is designed for, it's a very worthwhile tradeoff in my opinion. Such long-running transactions need to be killed.\r\n\r\nRe: -- If after upgrade page has no free space for special data, tuples are\r\n converted to \"double xmax\" format: xmin became virtual\r\n FrozenTransactionId, xmax occupies the whole 64bit.\r\n Page converted to new format when vacuum frees enough space.\r\n\r\nI'm concerned about the maintainability impact of having 2 new on-disk page formats. It's already complex enough with XIDs and multixact-XIDs.\r\n\r\nIf the lack of space for the two epochs in the special data area is a problem only in an upgrade scenario, why not resolve the problem before completing the upgrade process like a kind of post-process pg_repack operation that converts all \"double xmax\" pages to the \"double-epoch\" page format? i.e. maybe the \"double xmax\" representation is needed as an intermediate representation during upgrade, but after upgrade completes successfully there are no pages with the \"double-xmax\" representation. 
This would eliminate a whole class of coding errors and would make the code dealing with 64-bit XIDs simpler and more maintainable.\r\n\r\n /Jim\r\n\r\n", "msg_date": "Tue, 4 Jan 2022 22:22:50 +0000", "msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "\n\nOn 2021/12/30 21:15, Maxim Orlov wrote:\n> Hi, hackers!\n> \n> Long time wraparound was a really big pain for highly loaded systems. One source of performance degradation is the need to vacuum before every wraparound.\n> And there were several proposals to make XIDs 64-bit like [1], [2], [3] and [4] to name a few.\n> \n> The approach [2] seems to have stalled on CF since 2018. But meanwhile it was successfully being used in our Postgres Pro fork all time since then. We have hundreds of customers using 64-bit XIDs. Dozens of instances are under load that require wraparound each 1-5 days with 32-bit XIDs.\n> It really helps the customers with a huge transaction load that in the case of 32-bit XIDs could experience wraparounds every day. So I'd like to propose this approach modification to CF.\n> \n> PFA updated working patch v6 for PG15 development cycle.\n> It is based on a patch by Alexander Korotkov version 5 [5] with a few fixes, refactoring and was rebased to PG15.\n\nThanks a lot! I'm really happy to see this proposal again!!\n\nIs there any documentation or README explaining this whole 64-bit XID mechanism?\n\nCould you tell me what happens if new tuple with XID larger than xid_base + 0xFFFFFFFF is inserted into the page? Such new tuple is not allowed to be inserted into that page? Or xid_base and xids of all existing tuples in the page are increased? 
Also what happens if one of those xids (of existing tuples) cannot be changed because the tuple still can be seen by very-long-running transaction?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 5 Jan 2022 14:40:44 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Thu, 30 Dec 2021 at 13:19, Maxim Orlov <orlovmg@gmail.com> wrote:\n>\n> Hi, hackers!\n>\n> Long time wraparound was a really big pain for highly loaded systems. One source of performance degradation is the need to vacuum before every wraparound.\n> And there were several proposals to make XIDs 64-bit like [1], [2], [3] and [4] to name a few.\n\nVery good to see this revived.\n\n> PFA updated working patch v6 for PG15 development cycle.\n> It is based on a patch by Alexander Korotkov version 5 [5] with a few fixes, refactoring and was rebased to PG15.\n>\n> Main changes:\n> - Change TransactionId to 64bit\n\nThis sounds like a good route to me.\n\n> - Disk tuple format (HeapTupleHeader) is unchanged: xmin and xmax\n> remains 32bit\n> -- 32bit xid is named ShortTransactionId now.\n> -- Exception: see \"double xmax\" format below.\n> - Heap page format is changed to contain xid and multixact base value,\n> tuple's xmin and xmax are offsets from.\n> -- xid_base and multi_base are stored as a page special data. 
PageHeader\n> remains unmodified.\n> -- If after upgrade page has no free space for special data, tuples are\n> converted to \"double xmax\" format: xmin became virtual\n> FrozenTransactionId, xmax occupies the whole 64bit.\n> Page converted to new format when vacuum frees enough space.\n> - In-memory tuples (HeapTuple) were enriched with copies of xid_base and\n> multi_base from a page.\n\nI think we need more Overview of Behavior than is available with this\npatch, perhaps in the form of a README, such as in\nsrc/backend/access/heap/README.HOT.\n\nMost people's comments are about what the opportunities and problems\ncaused, and mine are definitely there also. i.e. explain the user\nvisible behavior.\nPlease explain the various new states that pages can be in and what\nthe effects are,\n\nMy understanding is this would be backwards compatible, so we can\nupgrade to it. Please confirm.\n\nThanks\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 5 Jan 2022 18:00:05 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Tue, Jan 4, 2022 at 10:22:50PM +0000, Finnerty, Jim wrote:\n> I'm concerned about the maintainability impact of having 2 new\n> on-disk page formats. It's already complex enough with XIDs and\n> multixact-XIDs.\n>\n> If the lack of space for the two epochs in the special data area is\n> a problem only in an upgrade scenario, why not resolve the problem\n> before completing the upgrade process like a kind of post-process\n> pg_repack operation that converts all \"double xmax\" pages to\n> the \"double-epoch\" page format? i.e. maybe the \"double xmax\"\n> representation is needed as an intermediate representation during\n> upgrade, but after upgrade completes successfully there are no pages\n> with the \"double-xmax\" representation. 
This would eliminate a whole\n> class of coding errors and would make the code dealing with 64-bit\n> XIDs simpler and more maintainable.\n\nWell, yes, we could do this, and it would avoid the complexity of having\nto support two XID representations, but we would need to accept that\nfast pg_upgrade would be impossible in such cases, since every page\nwould need to be checked and potentially updated.\n\nYou might try to do this while the server is first started and running\nqueries, but I think we found out from the online checkpoint patch that\nhaving the server in an intermediate state while running queries is very\ncomplex --- it might be simpler to just accept two XID formats all the\ntime than enabling the server to run with two formats for a short\nperiod. My big point is that this needs more thought.\n\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 5 Jan 2022 18:51:37 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Thu, Dec 30, 2021 at 03:15:16PM +0300, Maxim Orlov wrote:\n> PFA updated working patch v6 for PG15 development cycle.\n> It is based on a patch by Alexander Korotkov version 5 [5] with a few fixes,\n> refactoring and was rebased to PG15.\n> \n> Main changes:\n> - Change TransactionId to 64bit\n> - Disk tuple format (HeapTupleHeader) is unchanged: xmin and xmax\n>   remains 32bit\n> -- 32bit xid is named ShortTransactionId now.\n> -- Exception: see \"double xmax\" format below.\n> - Heap page format is changed to contain xid and multixact base value,\n>   tuple's xmin and xmax are offsets from.\n> -- xid_base and multi_base are stored as a page special data. 
PageHeader\n>    remains unmodified.\n> -- If after upgrade page has no free space for special data, tuples are\n>    converted to \"double xmax\" format: xmin became virtual\n>    FrozenTransactionId, xmax occupies the whole 64bit.\n>    Page converted to new format when vacuum frees enough space.\n\nI think it is a great idea to allow the 64-XID to span the 32-bit xmin\nand xmax fields when needed. It would be nice if we can get focus on\nthis feature so we are sure it gets into PG 15. Can we add this patch\nincrementally so people can more easily analyze it?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 5 Jan 2022 19:02:16 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Tue, Jan 4, 2022 at 05:49:07PM +0000, Finnerty, Jim wrote:\n> Hi Maxim,\n> I’m glad to see that you’re trying to carry the 64-bit XID work forward. I\n> had not noticed that my earlier patch (also derived from Alexander Kortkov’s\n> patch) was responded to back in September. Perhaps we can merge some of the\n> code cleanup that it contained, such as using XID_FMT everywhere and creating a\n> type for the kind of page returned by TransactionIdToPage() to make the code\n> cleaner.\n> \n> Is your patch functionally the same as the PostgresPro implementation? If\n> so, I think it would be helpful for everyone’s understanding to read the\n> PostgresPro documentation on VACUUM. See in particular section “Forced\n> shrinking pg_clog and pg_multixact”\n>\n> https://postgrespro.com/docs/enterprise/9.6/routine-vacuuming#\n> vacuum-for-wraparound\n\nGood point --- we still need vacuum freeze. 
It would be good to\nunderstand how much value we get in allowing vacuum freeze to be done\nless often --- how big can pg_xact/pg_multixact get before they are\nproblems?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 5 Jan 2022 19:03:55 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Wed, Jan 05, 2022 at 06:51:37PM -0500, Bruce Momjian wrote:\n> On Tue, Jan 4, 2022 at 10:22:50PM +0000, Finnerty, Jim wrote:\n> > I'm concerned about the maintainability impact of having 2 new\n> > on-disk page formats. It's already complex enough with XIDs and\n> > multixact-XIDs.\n> >\n> > If the lack of space for the two epochs in the special data area is\n> > a problem only in an upgrade scenario, why not resolve the problem\n> > before completing the upgrade process like a kind of post-process\n> > pg_repack operation that converts all \"double xmax\" pages to\n> > the \"double-epoch\" page format? i.e. maybe the \"double xmax\"\n> > representation is needed as an intermediate representation during\n> > upgrade, but after upgrade completes successfully there are no pages\n> > with the \"double-xmax\" representation. This would eliminate a whole\n> > class of coding errors and would make the code dealing with 64-bit\n> > XIDs simpler and more maintainable.\n> \n> Well, yes, we could do this, and it would avoid the complexity of having\n> to support two XID representations, but we would need to accept that\n> fast pg_upgrade would be impossible in such cases, since every page\n> would need to be checked and potentially updated.\n> \n> You might try to do this while the server is first started and running\n> queries, but I think we found out from the online checkpoint patch that\n\nI think you meant the online checksum patch. 
Which this reminded me of, too.\n\nhttps://commitfest.postgresql.org/31/2611/\n\n> having the server in an intermediate state while running queries is very\n> complex --- it might be simpler to just accept two XID formats all the\n> time than enabling the server to run with two formats for a short\n> period. My big point is that this needs more thought.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 5 Jan 2022 18:12:26 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Wed, Jan 5, 2022 at 06:12:26PM -0600, Justin Pryzby wrote:\n> On Wed, Jan 05, 2022 at 06:51:37PM -0500, Bruce Momjian wrote:\n> > On Tue, Jan 4, 2022 at 10:22:50PM +0000, Finnerty, Jim wrote:\n> > > I'm concerned about the maintainability impact of having 2 new\n> > > on-disk page formats. It's already complex enough with XIDs and\n> > > multixact-XIDs.\n> > >\n> > > If the lack of space for the two epochs in the special data area is\n> > > a problem only in an upgrade scenario, why not resolve the problem\n> > > before completing the upgrade process like a kind of post-process\n> > > pg_repack operation that converts all \"double xmax\" pages to\n> > > the \"double-epoch\" page format? i.e. maybe the \"double xmax\"\n> > > representation is needed as an intermediate representation during\n> > > upgrade, but after upgrade completes successfully there are no pages\n> > > with the \"double-xmax\" representation. 
This would eliminate a whole\n> > > class of coding errors and would make the code dealing with 64-bit\n> > > XIDs simpler and more maintainable.\n> > \n> > Well, yes, we could do this, and it would avoid the complexity of having\n> > to support two XID representations, but we would need to accept that\n> > fast pg_upgrade would be impossible in such cases, since every page\n> > would need to be checked and potentially updated.\n> > \n> > You might try to do this while the server is first started and running\n> > queries, but I think we found out from the online checkpoint patch that\n> \n> I think you meant the online checksum patch. Which this reminded me of, too.\n> \n> https://commitfest.postgresql.org/31/2611/\n\nSorry, yes, I have checkpoint on my mind. ;-)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 5 Jan 2022 19:43:29 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi!\n\nOn Thu, Jan 6, 2022 at 3:02 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Thu, Dec 30, 2021 at 03:15:16PM +0300, Maxim Orlov wrote:\n> > PFA updated working patch v6 for PG15 development cycle.\n> > It is based on a patch by Alexander Korotkov version 5 [5] with a few fixes,\n> > refactoring and was rebased to PG15.\n> >\n> > Main changes:\n> > - Change TransactionId to 64bit\n> > - Disk tuple format (HeapTupleHeader) is unchanged: xmin and xmax\n> > remains 32bit\n> > -- 32bit xid is named ShortTransactionId now.\n> > -- Exception: see \"double xmax\" format below.\n> > - Heap page format is changed to contain xid and multixact base value,\n> > tuple's xmin and xmax are offsets from.\n> > -- xid_base and multi_base are stored as a page special data. 
PageHeader\n> > remains unmodified.\n> > -- If after upgrade page has no free space for special data, tuples are\n> > converted to \"double xmax\" format: xmin became virtual\n> > FrozenTransactionId, xmax occupies the whole 64bit.\n> > Page converted to new format when vacuum frees enough space.\n>\n> I think it is a great idea to allow the 64-XID to span the 32-bit xmin\n> and xmax fields when needed. It would be nice if we can get focus on\n> this feature so we are sure it gets into PG 15. Can we add this patch\n> incrementally so people can more easily analyze it?\n\nI see at least the following major issues/questions in this patch.\n1) Current code relies on the fact that TransactionId can be\natomically read from/written to shared memory. With 32-bit systems\nand 64-bit TransactionId, that's not true anymore. Therefore, the\npatch has concurrency issues on 32-bit systems. We need to carefully\nreview these issues and provide a fallback for 32-bit systems. I\nsuppose nobody is thinking about dropping off 32-bit systems, right?\nAlso, I wonder how critical for us is the overhead for 32-bit systems.\nThey are going to become legacy, so overhead isn't so critical, right?\n2) With this patch we still need to freeze to cut SLRUs. This is\nespecially problematic with Multixacts, because systems heavily using\nrow-level locks can consume an enormous amount of multixacts. That is\nespecially problematic when we have 2x bigger multixacts. We probably\ncan provide an alternative implementation for multixact vacuum, which\ndoesn't require scanning all the heaps. That is a pretty amount of\nwork though. The clog is much smaller and we can cut it more rarely.\nPerhaps, we could tolerate freezing to cut clog, couldn't we?\n3) 2x bigger in-memory representation of TransactionId have overheads.\nIn particular, it could mitigate the effect of recent advancements\nfrom Andres Freund. I'm not exactly sure we should/can do something\nwith this. 
But I think this should be at least carefully analyzed.\n4) SP-GiST index stores TransactionId on pages. Could we tolerate\ndropping SP-GiST indexes on a major upgrade? Or do we have to invent\nsomething smarter?\n5) 32-bit limitation within the page mentioned upthread by Stephen\nFrost should be also carefully considered. Ideally, we should get rid\nof it, but I don't have particular ideas in this field for now. At\nleast, we should make sure we did our best at error reporting and\nmonitoring capabilities.\n\nI think the realistic goal for PG 15 development cycle would be\nagreement on a roadmap for all the items above (and probably some\ninitial implementations).\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 6 Jan 2022 05:53:22 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Thu, 30 Dec 2021 at 13:19, Maxim Orlov <orlovmg@gmail.com> wrote:\n\n> Your opinions are very much welcome!\n\nThis is a review of the Int64 options patch,\n\"v6-0001-Add-64-bit-GUCs-for-xids.patch\"\n\nApplies cleanly, with some fuzz, compiles cleanly and passes make check.\nPatch eyeballs OK, no obvious defects.\nTested using the attached test, so seems to work correctly.\nOn review of docs, no additions or changes required.\nPerhaps add something to README? 
If so, minor doc patch attached.\n\nOtherwise, this sub-patch is READY FOR COMMITTER.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Thu, 6 Jan 2022 10:24:09 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "(Maxim) Re: -- If after upgrade page has no free space for special data, tuples are\r\n converted to \"double xmax\" format: xmin became virtual\r\n FrozenTransactionId, xmax occupies the whole 64bit.\r\n Page converted to new format when vacuum frees enough space.\r\n\r\nA better way would be to prepare the database for conversion to the 64-bit XID format before the upgrade so that it ensures that every page has enough room for the two new epochs (E bits).\r\n\r\n1. Enforce the rule that no INSERT or UPDATE to an existing page will leave less than E bits of free space on a heap page\r\n\r\n2. Run an online and restartable task, analogous to pg_repack, that rewrites and splits any page that has less than E bits of free space. This needs to be run on all non-temp tables in all schemas in all databases. DDL operations are not allowed on a target table while this operation runs, which is enforced by taking an ACCESS SHARE lock on each table while the process is running. To mitigate the effects of this restriction, the restartable task can be restricted to run only in certain hours. This could be implemented as a background maintenance task that runs for X hours as of a certain time of day and then kicks itself off again in 24-X hours, logging its progress.\r\n\r\nWhen this task completes, the database is ready for upgrade to 64-bit XIDs, and there is no possibility that any page has insufficient free space for the special data.\r\n\r\nWould you agree that this approach would completely eliminate the need for a \"double xmax\" representation? 
\r\n\r\n /Jim\r\n\r\n\r\n", "msg_date": "Thu, 6 Jan 2022 13:15:19 +0000", "msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Thu, 6 Jan 2022 at 13:15, Finnerty, Jim <jfinnert@amazon.com> wrote:\n>\n> (Maxim) Re: -- If after upgrade page has no free space for special data, tuples are\n> converted to \"double xmax\" format: xmin became virtual\n> FrozenTransactionId, xmax occupies the whole 64bit.\n> Page converted to new format when vacuum frees enough space.\n>\n> A better way would be to prepare the database for conversion to the 64-bit XID format before the upgrade so that it ensures that every page has enough room for the two new epochs (E bits).\n\nMost software has a one-stage upgrade model. What you propose would\nhave us install 2 things, with a step in-between, which makes it\nharder to manage.\n\n> 1. Enforce the rule that no INSERT or UPDATE to an existing page will leave less than E bits of free space on a heap page\n>\n> 2. Run an online and restartable task, analogous to pg_repack, that rewrites and splits any page that has less than E bits of free space. This needs to be run on all non-temp tables in all schemas in all databases. DDL operations are not allowed on a target table while this operation runs, which is enforced by taking an ACCESS SHARE lock on each table while the process is running. To mitigate the effects of this restriction, the restartable task can be restricted to run only in certain hours. 
This could be implemented as a background maintenance task that runs for X hours as of a certain time of day and then kicks itself off again in 24-X hours, logging its progress.\n>\n> When this task completes, the database is ready for upgrade to 64-bit XIDs, and there is no possibility that any page has insufficient free space for the special data.\n>\n> Would you agree that this approach would completely eliminate the need for a \"double xmax\" representation?\n\nI agree about the idea for scanning existing data blocks, but why not\ndo this AFTER upgrade?\n\n1. Upgrade, with important aspect not-enabled-yet, but everything else\nworking - all required software is delivered in one shot, with fast\nupgrade\n2. As each table is VACUUMed, we confirm/clean/groom data blocks so\neach table is individually confirmed as being ready. The pace that\nthis happens at is under user control.\n3. When all tables have been prepared, then restart to allow xid64 format usage\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 6 Jan 2022 14:09:35 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Re: Most software has a one-stage upgrade model. What you propose would\r\n have us install 2 things, with a step in-between, which makes it\r\n harder to manage.\r\n\r\nThe intended benefit would be that the code doesn't need to handle the possibility of 2 different XID representations for the indefinite future. \r\n\r\nI agree that VACUUM would be the preferred tool to make room for the special data area so that there is no need to install a separate tool, though, whether this work happens before or after the upgrade. \r\n\r\nRe: 1. Upgrade, with important aspect not-enabled-yet, but everything else working - all required software is delivered in one shot, with fast upgrade\r\n\r\nLet's clarify what happens during upgrade. 
What format are the pages in immediately after the upgrade? \r\n\r\n 2. As each table is VACUUMed, we confirm/clean/groom data blocks so\r\n each table is individually confirmed as being ready. The pace that\r\n this happens at is under user control.\r\n\r\nWhat are VACUUM's new responsibilities in this phase? VACUUM needs a new task that confirms when there exists no heap page for a table that is not ready.\r\n\r\nIf upgrade put all the pages into either double-xmax or double-epoch representation, then VACUUM's responsibility could be to split the double-xmax pages into the double-epoch representation and verify when there exists no double-xmax pages.\r\n\r\n 3. When all tables have been prepared, then restart to allow xid64 format usage\r\n\r\nLet's also clarify what happens at restart time.\r\n\r\nIf we were to do the upgrade before preparing space in advance, is there a way to ever remove the code that knows about the double-xmax XID representation?\r\n\r\n\r\n\r\n", "msg_date": "Thu, 6 Jan 2022 16:20:10 +0000", "msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Tue, Jan 4, 2022 at 9:40 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> Could you tell me what happens if new tuple with XID larger than xid_base + 0xFFFFFFFF is inserted into the page? Such new tuple is not allowed to be inserted into that page?\n\nI fear that this patch will have many bugs along these lines. Example:\nWhy is it okay that convert_page() may have to defragment a heap page,\nwithout holding a cleanup lock? That will subtly break code that holds\na pin on the buffer, when a tuple slot contains a C pointer to a\nHeapTuple in shared memory (though only if we get unlucky).\n\nCurrently we are very permissive about what XID/backend can (or\ncannot) consume a specific piece of free space from a specific heap\npage in code like RelationGetBufferForTuple(). 
It would be very hard\nto enforce a policy like \"your XID cannot insert onto this particular\nheap page\" with the current FSM design. (I actually think that we\nshould have a FSM that supports these requirements, but that's a big\nproject -- a long term goal of mine [1].)\n\n> Or xid_base and xids of all existing tuples in the page are increased? Also what happens if one of those xids (of existing tuples) cannot be changed because the tuple still can be seen by very-long-running transaction?\n\nThat doesn't work in the general case, I think. How could it, unless\nwe truly had 64-bit XIDs in heap tuple headers? You can't necessarily\nfreeze to fix the problem, because we can only freeze XIDs that 1.)\ncommitted, and 2.) are visible to every possible MVCC snapshot. (I\nthink you were alluding to this problem yourself.)\n\nI believe that a good solution to the problem that this patch tries to\nsolve needs to be more ambitious. I think that we need to return to\nfirst principles, rather than extending what we have already.\nCurrently, we store XIDs in tuple headers so that we can determine the\ntuple's visibility status, based on whether the XID committed (or\naborted), and where our snapshot sees the XID as \"in the past\" (in the\ncase of an inserted tuple's xmin). You could say that XIDs from tuple\nheaders exist so we can see differences *between* tuples. But these\ndifferences are typically not useful/interesting for very long. 32-bit\nXIDs are sometimes not wide enough, but usually they're \"too wide\":\nWhy should we need to consider an old XID (e.g. do clog lookups) at\nall, barring extreme cases?\n\nWhy do we need to keep any kind of metadata about transactions around\nfor a long time? 
Postgres has not supported time travel in 25 years!\n\nIf we eagerly cleaned-up aborted transactions with a special kind of\nVACUUM (which would remove aborted XIDs), we could also maintain a\nstructure that indicates if all of the XIDs on a heap page are known\n\"all committed\" implicitly (no dirtying the page, no hint bits, etc)\n-- something a little like the visibility map, that is mostly set\nimplicitly (not during VACUUM). That doesn't fix the wraparound\nproblem itself, of course. But it enables a design that imposes the\nsame problem on the specific old snapshot instead -- something like a\n\"snapshot too old\" error is much better than a system-wide wraparound\nfailure. That approach is definitely very hard, and also requires a\nsmart FSM along the lines described in [1], but it seems like the best\nway forward.\n\nAs I pointed out already, freezing is bad because it imposes the\nrequirement that everybody considers an affected XID committed and\nvisible, which is brittle (e.g., old snapshots can cause wraparound\nfailure). More generally, we rely too much on explicitly maintaining\n\"absolute\" metadata inline, when we should implicitly maintain\n\"relative\" metadata (that can be discarded quickly and without concern\nfor old snapshots). 
We need to be more disciplined about what XIDs can\nmodify what heap pages in the first place (in code like hio.c and the\nFSM) to make all this work.\n\n[1] https://www.postgresql.org/message-id/CAH2-Wz%3DzEV4y_wxh-A_EvKxeAoCMdquYMHABEh_kZO1rk3a-gw%40mail.gmail.com\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 6 Jan 2022 12:44:52 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": ",On Thu, Jan 6, 2022 at 4:15 PM Finnerty, Jim <jfinnert@amazon.com> wrote:\n> (Maxim) Re: -- If after upgrade page has no free space for special data, tuples are\n> converted to \"double xmax\" format: xmin became virtual\n> FrozenTransactionId, xmax occupies the whole 64bit.\n> Page converted to new format when vacuum frees enough space.\n>\n> A better way would be to prepare the database for conversion to the 64-bit XID format before the upgrade so that it ensures that every page has enough room for the two new epochs (E bits).\n>\n> 1. Enforce the rule that no INSERT or UPDATE to an existing page will leave less than E bits of free space on a heap page\n>\n> 2. Run an online and restartable task, analogous to pg_repack, that rewrites and splits any page that has less than E bits of free space. This needs to be run on all non-temp tables in all schemas in all databases. DDL operations are not allowed on a target table while this operation runs, which is enforced by taking an ACCESS SHARE lock on each table while the process is running. To mitigate the effects of this restriction, the restartable task can be restricted to run only in certain hours. 
This could be implemented as a background maintenance task that runs for X hours as of a certain time of day and then kicks itself off again in 24-X hours, logging its progress.\n>\n> When this task completes, the database is ready for upgrade to 64-bit XIDs, and there is no possibility that any page has insufficient free space for the special data.\n>\n> Would you agree that this approach would completely eliminate the need for a \"double xmax\" representation?\n\nThe \"prepare\" approach was the first tried.\nhttps://github.com/postgrespro/pg_pageprep\nBut it appears to be very difficult and unreliable. After investing\nmany months into pg_pageprep, \"double xmax\" approach appears to be\nvery fast to implement and reliable.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Fri, 7 Jan 2022 06:35:36 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "\n\nOn 2022/01/06 19:24, Simon Riggs wrote:\n> On Thu, 30 Dec 2021 at 13:19, Maxim Orlov <orlovmg@gmail.com> wrote:\n> \n>> Your opinions are very much welcome!\n> \n> This is a review of the Int64 options patch,\n> \"v6-0001-Add-64-bit-GUCs-for-xids.patch\"\n\nDo we really need to support both int32 and int64 options? Isn't it enough to replace the existing int32 option with int64 one? 
Or how about using string-type option for very large number like 64-bit XID, like it's done for recovery_target_xid?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 7 Jan 2022 14:18:30 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Re: clog page numbers, as returned by TransactionIdToPage\r\n\r\n-\tint\t\t\tpageno = TransactionIdToPage(xid);\t/* get page of parent */\r\n+\tint64\t\tpageno = TransactionIdToPage(xid);\t/* get page of parent */\r\n\r\n...\r\n\r\n-\tint\t\t\tpageno = TransactionIdToPage(subxids[0]);\r\n+\tint64\t\tpageno = TransactionIdToPage(subxids[0]);\r\n \tint\t\t\toffset = 0;\r\n \tint\t\t\ti = 0;\r\n \r\n...\r\n\r\n-\t\tint\t\t\tnextpageno;\r\n+\t\tint64\t\tnextpageno;\r\n\r\nEtc.\r\n\r\nIn all those places where you are replacing int with int64 for the kind of values returned by TransactionIdToPage(), would you mind replacing the int64's with a type name, such as ClogPageNumber, for improved code maintainability?\r\n\r\n\r\n", "msg_date": "Fri, 7 Jan 2022 15:39:16 +0000", "msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Re: The \"prepare\" approach was the first tried.\r\n https://github.com/postgrespro/pg_pageprep\r\n But it appears to be very difficult and unreliable. After investing\r\n many months into pg_pageprep, \"double xmax\" approach appears to be\r\n very fast to implement and reliable.\r\n\r\nI'd still like a plan to retire the \"double xmax\" representation eventually. Previously I suggested that this could be done as a post-process, before upgrade is complete, but that could potentially make upgrade very slow. 
\r\n\r\nAnother way to retire the \"double xmax\" representation eventually could be to disallow \"double xmax\" pages in subsequent major version upgrades (e.g. to PG16, if \"double xmax\" pages are introduced in PG15). This gives the luxury of time after a fast upgrade to convert all pages to contain the epochs, while still providing a path to more maintainable code in the future.\r\n\r\n", "msg_date": "Fri, 7 Jan 2022 15:53:51 +0000", "msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Fri, Jan 07, 2022 at 03:53:51PM +0000, Finnerty, Jim wrote:\n> I'd still like a plan to retire the \"double xmax\" representation eventually. Previously I suggested that this could be done as a post-process, before upgrade is complete, but that could potentially make upgrade very slow. \n> \n> Another way to retire the \"double xmax\" representation eventually could be to disallow \"double xmax\" pages in subsequent major version upgrades (e.g. to PG16, if \"double xmax\" pages are introduced in PG15). This gives the luxury of time after a fast upgrade to convert all pages to contain the epochs, while still providing a path to more maintainable code in the future.\n\nYes, but how are you planning to rewrite it? Is vacuum enough?\nI suppose it'd need FREEZE + DISABLE_PAGE_SKIPPING ?\n\nThis would preclude upgrading \"across\" v15. 
Maybe that'd be okay, but it'd be\na new and atypical restriction.\n\nHow would you enforce that it'd been run on v15 before upgrading to pg16 ?\n\nYou'd need to track whether vacuum had completed the necessary steps in pg15.\nI don't think it'd be okay to make pg_upgrade --check to read every tuple.\n\nThe \"keeping track\" part is what reminds me of the online checksum patch.\nIt'd be ideal if there were a generic solution to this kind of task, or at\nleast a \"model\" process to follow.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 7 Jan 2022 10:09:21 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On 07.01.22 06:18, Fujii Masao wrote:\n> On 2022/01/06 19:24, Simon Riggs wrote:\n>> On Thu, 30 Dec 2021 at 13:19, Maxim Orlov <orlovmg@gmail.com> wrote:\n>>\n>>> Your opinions are very much welcome!\n>>\n>> This is a review of the Int64 options patch,\n>> \"v6-0001-Add-64-bit-GUCs-for-xids.patch\"\n> \n> Do we really need to support both int32 and int64 options? Isn't it \n> enough to replace the existing int32 option with int64 one?\n\nI think that would create a lot of problems. You'd have to change every \nunderlying int variable to int64, and then check whether that causes any \nissues where they are used (wrong printf format, assignments, \noverflows), and you'd have to check whether the existing limits are \nstill appropriate. And extensions would be upset. This would be a big \nmess.\n\n> Or how about \n> using string-type option for very large number like 64-bit XID, like \n> it's done for recovery_target_xid?\n\nSeeing how many variables that contain transaction ID information \nactually exist, I think it could be worth introducing a new category as \nproposed. Otherwise, you'd have to write a lot of check and assign hooks.\n\nI do wonder whether signed vs. unsigned is handled correctly. 
\nTransaction IDs are unsigned, but all GUC handling is signed.\n\n\n", "msg_date": "Fri, 7 Jan 2022 17:14:47 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Fri, Jan 7, 2022 at 03:53:51PM +0000, Finnerty, Jim wrote:\n> Re: The \"prepare\" approach was the first tried.\n> https://github.com/postgrespro/pg_pageprep But it appears to be\n> very difficult and unreliable. After investing many months into\n> pg_pageprep, \"double xmax\" approach appears to be very fast to\n> implement and reliable.\n>\n> I'd still like a plan to retire the \"double xmax\" representation\n> eventually. Previously I suggested that this could be done as a\n> post-process, before upgrade is complete, but that could potentially\n> make upgrade very slow.\n>\n> Another way to retire the \"double xmax\" representation eventually\n> could be to disallow \"double xmax\" pages in subsequent major version\n> upgrades (e.g. to PG16, if \"double xmax\" pages are introduced in\n> PG15). This gives the luxury of time after a fast upgrade to convert\n> all pages to contain the epochs, while still providing a path to more\n> maintainable code in the future.\n\nThis gets into the entire issue we have discussed in the past but never\nresolved --- how do we manage state changes in the Postgres file format\nwhile the server is running? 
pg_upgrade and pg_checksums avoid the\nproblem by doing such changes while the server is down, and other file\nformats have avoided it by allowing perpetual reading of the old format.\n\nAny such non-perpetual changes while the server is running must deal\nwith recording the start of the state change, the completion of it,\ncommunicating such state changes to all running backends in a\nsynchronous manner, and possible restarting of the state change.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 7 Jan 2022 11:52:00 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Fri, 7 Jan 2022 at 16:09, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, Jan 07, 2022 at 03:53:51PM +0000, Finnerty, Jim wrote:\n> > I'd still like a plan to retire the \"double xmax\" representation eventually. Previously I suggested that this could be done as a post-process, before upgrade is complete, but that could potentially make upgrade very slow.\n> >\n> > Another way to retire the \"double xmax\" representation eventually could be to disallow \"double xmax\" pages in subsequent major version upgrades (e.g. to PG16, if \"double xmax\" pages are introduced in PG15). This gives the luxury of time after a fast upgrade to convert all pages to contain the epochs, while still providing a path to more maintainable code in the future.\n>\n> Yes, but how are you planning to rewrite it? Is vacuum enough?\n\nProbably not, but VACUUM is the place to add such code.\n\n> I suppose it'd need FREEZE + DISABLE_PAGE_SKIPPING ?\n\nYes\n\n> This would preclude upgrading \"across\" v15. Maybe that'd be okay, but it'd be\n> a new and atypical restriction.\n\nI don't see that restriction. Anyone upgrading from before PG15 would\napply the transform. 
Just because we introduce a transform in PG15\ndoesn't mean we can't apply that transform in later releases as well,\nto allow say PG14 -> PG16.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 7 Jan 2022 17:36:05 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Thu, Jan 6, 2022 at 3:45 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Tue, Jan 4, 2022 at 9:40 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > Could you tell me what happens if new tuple with XID larger than xid_base + 0xFFFFFFFF is inserted into the page? Such new tuple is not allowed to be inserted into that page?\n>\n> I fear that this patch will have many bugs along these lines. Example:\n> Why is it okay that convert_page() may have to defragment a heap page,\n> without holding a cleanup lock? That will subtly break code that holds\n> a pin on the buffer, when a tuple slot contains a C pointer to a\n> HeapTuple in shared memory (though only if we get unlucky).\n\nYeah. I think it's possible that some approach along the lines of what\nis proposed here can work, but quality of implementation is a big\nissue. This stuff is not easy to get right. Another thing that I'm\nwondering about is the \"double xmax\" representation. That requires\nsome way of distinguishing when that representation is in use. I'd be\ncurious to know where we found the bits for that -- the tuple header\nisn't exactly replete with extra bit space.\n\nAlso, if we have an epoch of some sort that is included in new page\nheaders but not old ones, that adds branches to code that might\nsometimes be quite hot. I don't know how much of a problem that is,\nbut it seems worth worrying about.\n\nFor all of that, I don't particularly agree with Jim Finnerty's idea\nthat we ought to solve the problem by forcing sufficient space to\nexist in the page pre-upgrade. 
There are some advantages to such\napproaches, but they make it really hard to roll out changes. You have\nto roll out the enabling change first, wait until everyone is running\na release that supports it, and only then release the technology that\nrequires the additional page space. Since we don't put new features\ninto back-branches -- and an on-disk format change would be a poor\nplace to start -- that would make rolling something like this out take\nmany years. I think we'll be much happier putting all the complexity\nin the new release.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Jan 2022 16:12:43 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Wed, Jan 5, 2022 at 9:53 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> I see at least the following major issues/questions in this patch.\n> 1) Current code relies on the fact that TransactionId can be\n> atomically read from/written to shared memory. With 32-bit systems\n> and 64-bit TransactionId, that's not true anymore. Therefore, the\n> patch has concurrency issues on 32-bit systems. We need to carefully\n> review these issues and provide a fallback for 32-bit systems. I\n> suppose nobody is thinking about dropping off 32-bit systems, right?\n\nI think that's right. Not yet, anyway.\n\n> Also, I wonder how critical for us is the overhead for 32-bit systems.\n> They are going to become legacy, so overhead isn't so critical, right?\n\nAgreed.\n\n> 2) With this patch we still need to freeze to cut SLRUs. This is\n> especially problematic with Multixacts, because systems heavily using\n> row-level locks can consume an enormous amount of multixacts. That is\n> especially problematic when we have 2x bigger multixacts. We probably\n> can provide an alternative implementation for multixact vacuum, which\n> doesn't require scanning all the heaps. 
That is a pretty amount of\n> work though. The clog is much smaller and we can cut it more rarely.\n> Perhaps, we could tolerate freezing to cut clog, couldn't we?\n\nRight. We can't let any of the SLRUs -- don't forget about stuff like\npg_subtrans, which is a multiple of the size of clog -- grow without\nbound, even if it never forces a system shutdown. I'm not sure it's a\ngood idea to think about introducing new freezing mechanisms at the\nsame time as we're making other changes, though. Just removing the\npossibility of a wraparound shutdown without changing any of the rules\nabout how and when we freeze would be a significant advancement. Other\nchanges could be left for future work.\n\n> 3) 2x bigger in-memory representation of TransactionId have overheads.\n> In particular, it could mitigate the effect of recent advancements\n> from Andres Freund. I'm not exactly sure we should/can do something\n> with this. But I think this should be at least carefully analyzed.\n\nSeems fair.\n\n> 4) SP-GiST index stores TransactionId on pages. Could we tolerate\n> dropping SP-GiST indexes on a major upgrade? Or do we have to invent\n> something smarter?\n\nProbably depends on how much work it is. SP-GiST indexes are not\nmainstream, so I think we could at least consider breaking\ncompatibility, but it doesn't seem like a thing to do lightly.\n\n> 5) 32-bit limitation within the page mentioned upthread by Stephen\n> Frost should be also carefully considered. Ideally, we should get rid\n> of it, but I don't have particular ideas in this field for now. At\n> least, we should make sure we did our best at error reporting and\n> monitoring capabilities.\n\nI don't think I understand the thinking here. As long as we retain the\nexisting limit that the oldest running XID can't be more than 2\nbillion XIDs in the past, we can't ever need to throw an error. 
A new\npage modification that finds very old XIDs on the page can always\nescape trouble by pruning the page and freezing whatever old XIDs\nsurvive pruning.\n\nI would argue that it's smarter not to change the in-memory\nrepresentation of XIDs to 64-bit in places like the ProcArray. As you\nmention in (4), that might hurt performance. But also, the benefit is\nminimal. Nobody is really sad that they can't keep transactions open\nforever. They are sad because the system has severe bloat and/or shuts\ndown entirely. Some kind of change along these lines can fix the\nsecond of those problems, and that's progress.\n\n> I think the realistic goal for PG 15 development cycle would be\n> agreement on a roadmap for all the items above (and probably some\n> initial implementations).\n\n+1. Trying to rush something through to commit is just going to result\nin a bunch of bugs. We need to work through the issues carefully and\ntake the time to do it well.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Jan 2022 16:22:50 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, Jan 5, 2022 at 9:53 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > 5) 32-bit limitation within the page mentioned upthread by Stephen\n> > Frost should be also carefully considered. Ideally, we should get rid\n> > of it, but I don't have particular ideas in this field for now. At\n> > least, we should make sure we did our best at error reporting and\n> > monitoring capabilities.\n> \n> I don't think I understand the thinking here. As long as we retain the\n> existing limit that the oldest running XID can't be more than 2\n> billion XIDs in the past, we can't ever need to throw an error. 
A new\n> page modification that finds very old XIDs on the page can always\n> escape trouble by pruning the page and freezing whatever old XIDs\n> survive pruning.\n\nSo we'll just fail such an old transaction? Or force a server restart?\nor..? What if we try to signal that transaction and it doesn't go away?\n\n> I would argue that it's smarter not to change the in-memory\n> representation of XIDs to 64-bit in places like the ProcArray. As you\n> mention in (4), that might hurt performance. But also, the benefit is\n> minimal. Nobody is really sad that they can't keep transactions open\n> forever. They are sad because the system has severe bloat and/or shuts\n> down entirely. Some kind of change along these lines can fix the\n> second of those problems, and that's progress.\n\nI brought up the concern that I did because I would be a bit sad if I\ncouldn't have a transaction open for a day on a very high rate system of\nthe type being discussed here. Would be fantastic if we had a solution\nto that issue, but I get that reducing the need to vacuum and such would\nbe a really nice improvement even if we can't make long running\ntransactions work. Then again, if we do actually change the in-memory\nbits- then maybe we could have such a long running transaction, provided\nit didn't try to make an update to a page with really old xids on it,\nwhich might be entirely reasonable in a lot of cases. I do still worry\nabout how we explain what the limitation here is and how to avoid\nhitting it. Definitely seems like a 'gotcha' that people are going to\ncomplain about, though hopefully not as much of one as the current cases\nwe hear about of vacuum falling behind and the system running out of\nxids.\n\n> > I think the realistic goal for PG 15 development cycle would be\n> > agreement on a roadmap for all the items above (and probably some\n> > initial implementations).\n> \n> +1. Trying to rush something through to commit is just going to result\n> in a bunch of bugs. 
We need to work through the issues carefully and
> take the time to do it well.

+1.

Thanks,

Stephen", "msg_date": "Fri, 7 Jan 2022 17:46:38 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": ">\n> I'd be\n> curious to know where we found the bits for that -- the tuple header\n> isn't exactly replete with extra bit space.\n>\n\n+1 - and can we somehow shoehorn in a version # into the new format so we\nnever have to look for spare bits again.", "msg_date": "Fri, 7 Jan 2022 18:36:11 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": ">\n> Perhaps we can merge some of the code cleanup that it contained, such\n> as using XID_FMT everywhere and creating a type for the kind of page\n> returned by TransactionIdToPage() to make the code cleaner.\n>\n\nAgree, I think this is a good idea.\n\n\n> Is your patch functionally the same as the PostgresPro\n> implementation?\n>\n\nYes, it is. It basically is PostgresPro implementation, not a concept or\nsmth.\n\n\n-- \nBest regards,\nMaxim Orlov.
", "msg_date": "Sat, 8 Jan 2022 11:21:20 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "> Is there any documentation or README explaining this whole 64-bit XID\n> mechanism?\n>\nThere is none, unfortunately. I would come back to this later.\n\n\n> Could you tell me what happens if new tuple with XID larger than xid_base\n> + 0xFFFFFFFF is inserted into the page? Such new tuple is not allowed to be\n> inserted into that page? Or xid_base and xids of all existing tuples in the\n> page are increased? Also what happens if one of those xids (of existing\n> tuples) cannot be changed because the tuple still can be seen by\n> very-long-running transaction?\n>\nAll this mechanism is around heap_insert/heap_update by\ncalling heap_page_prepare_for_xid() and if it fails (due to tuple still\nvisible) error is raised. Also If xid_base shift is not viable, it will try\nto remove old tuples.\n\n-- \nBest regards,\nMaxim Orlov.
", "msg_date": "Sat, 8 Jan 2022 11:48:41 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Sat, 8 Jan 2022 at 08:21, Maxim Orlov <orlovmg@gmail.com> wrote:
>>
>> Perhaps we can merge some of the code cleanup that it contained, such as using XID_FMT everywhere and creating a type for the kind of page returned by TransactionIdToPage() to make the code cleaner.
>
>
> Agree, I think this is a good idea.

Looks to me like the best next actions would be:

1. Submit a patch that uses XID_FMT everywhere, as a cosmetic change.
This looks like it will reduce the main patch size considerably and
make it much less scary. That can be cleaned up and committed while we
discuss the main approach.

2. Write up the approach in a detailed README, so people can
understand the proposal and assess if there are problems. A few short
notes and a link back to old conversations isn't enough to allow wide
review and give confidence on such a major patch.

-- 
Simon Riggs http://www.EnterpriseDB.com/", "msg_date": "Wed, 12 Jan 2022 13:26:57 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Re: patch that uses XID_FMT everywhere ... to make the main patch much smaller\r\n\r\nThat's exactly what my previous patch did, plus the patch to support 64-bit GUCs.\r\n\r\nMaxim, maybe it's still a good idea to isolate those two patches and submit them separately first, to reduce the size of the rest of the patch?\r\n\r\nOn 1/12/22, 8:28 AM, \"Simon Riggs\" <simon.riggs@enterprisedb.com> wrote:\r\n\r\n CAUTION: This email originated from outside of the organization. 
Do not click links or open attachments unless you can confirm the sender and know the content is safe.\r\n\r\n\r\n\r\n On Sat, 8 Jan 2022 at 08:21, Maxim Orlov <orlovmg@gmail.com> wrote:\r\n >>\r\n >> Perhaps we can merge some of the code cleanup that it contained, such as using XID_FMT everywhere and creating a type for the kind of page returned by TransactionIdToPage() to make the code cleaner.\r\n >\r\n >\r\n > Agree, I think this is a good idea.\r\n\r\n Looks to me like the best next actions would be:\r\n\r\n 1. Submit a patch that uses XID_FMT everywhere, as a cosmetic change.\r\n This looks like it will reduce the main patch size considerably and\r\n make it much less scary. That can be cleaned up and committed while we\r\n discuss the main approach.\r\n\r\n 2. Write up the approach in a detailed README, so people can\r\n understand the proposal and assess if there are problems. A few short\r\n notes and a link back to old conversations isn't enough to allow wide\r\n review and give confidence on such a major patch.\r\n\r\n --\r\n Simon Riggs http://www.EnterpriseDB.com/\r\n\r\n\r\n\r\n", "msg_date": "Wed, 12 Jan 2022 13:32:16 +0000", "msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": ">\n> Maxim, maybe it's still a good idea to isolate those two patches and\n> submit them separately first, to reduce the size of the rest of the patch?\n>\n\n\n> Looks to me like the best next actions would be:\n>\n> 1. Submit a patch that uses XID_FMT everywhere, as a cosmetic change.\n> This looks like it will reduce the main patch size considerably and\n> make it much less scary. That can be cleaned up and committed while we\n> discuss the main approach.\n>\n> 2. Write up the approach in a detailed README, so people can\n> understand the proposal and assess if there are problems. 
A few short\n> notes and a link back to old conversations isn't enough to allow wide\n> review and give confidence on such a major patch.\n>\n\nBig thanks to all for your ideas!\n\nWe intend to do the following work on the patch soon:\n1. Write a detailed README\n2. Split the patch into several pieces including a separate part for\nXID_FMT. But if committers meanwhile choose to commit Jim's XID_FMT patch\nwe also appreciate this and will rebase our patch accordingly.\n 2A. Probably refactor it to store precalculated XMIN/XMAX in memory\ntuple representation instead of t_xid_base/t_multi_base\n 2B. Split the in-memory part of a patch as a separate\n3. Construct some variants for leaving \"double xmax\" format as a temporary\none just after upgrade for having only one persistent on-disk format\ninstead of two.\n 3A. By using SQL function \"vacuum doublexmax;\"\nOR\n 3B. By freeing space on all heap pages for pd_special before\npg-upgrade.\nOR\n 3C. By automatically repacking all \"double xmax\" pages after upgrade\n(with a priority specified by common vacuum-related GUCs)\n4. Intentionally prohibit starting a new transaction with XID difference of\nmore than 2^32 from the oldest currently running one. This is to enforce\nsome dba's action for cleaning defunct transaction but not binding one:\nhe/she can wait if they consider these old transactions not defunct.\n5. Investigate and add a solution for archs without 64-bit atomic values.\n 5A. Provide XID 8-byte alignment for systems where 64-bit atomics is\nprovided for 8-byte aligned values.\n 5B. Wrap XID reading into PG atomic locks for remaining 32-bit ones\n(they are expected to be rare).\n\n--\nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Wed, 12 Jan 2022 18:03:11 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi, hackers!\n\nPFA patch with README for 64xid proposal. 
It is 0003 patch of the same v6,\nthat was proposed earlier [1].\nAs always, I very much appreciate your ideas on this readme patch, on\noverall 64xid patch [1], and on the roadmap on its improvement quoted above.\n\n [1]\nhttps://www.postgresql.org/message-id/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Fri, 14 Jan 2022 23:38:46 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi,\n\nOn Fri, Jan 14, 2022 at 11:38:46PM +0400, Pavel Borisov wrote:\n> \n> PFA patch with README for 64xid proposal. It is 0003 patch of the same v6,\n> that was proposed earlier [1].\n> As always, I very much appreciate your ideas on this readme patch, on\n> overall 64xid patch [1], and on the roadmap on its improvement quoted above.\n\nThanks for adding this documentation! Unfortunately, the cfbot can't apply a\npatchset split in multiple emails, so for now you only get coverage for this\nnew readme file, as this is what's being tested on the CI:\nhttps://github.com/postgresql-cfbot/postgresql/commit/f8f12ce29344bc7c72665c334b5eb40cee22becd\n\nCould you send the full patchset each time you make a modification? 
For now\nI'm simply attaching 0001, 0002 and 0003 to make sure that the cfbot will pick\nall current patches on its next run.", "msg_date": "Sat, 15 Jan 2022 10:31:23 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "I tried to pg_upgrade from a v13 instance like:\ntime make check -C src/bin/pg_upgrade oldsrc=`pwd`/13 oldbindir=`pwd`/13/tmp_install/usr/local/pgsql/bin\n\nI had compiled and installed v13 into `pwd`/13.\n\nFirst, test.sh failed, because of an option in initdb which doesn't exist in\nthe old version: -x 21000000000\n\nI patched test.sh so the option is used only for the \"new\" version.\n\nThe tab_core_types table has an XID column, so pg_upgrade --check complains and\nrefuses to run. If I drop it, then pg_upgrade runs, but then fails like this:\n\n|Files /home/pryzbyj/src/postgres/src/bin/pg_upgrade/tmp_check/new_xids.txt and /home/pryzbyj/src/postgres/src/bin/pg_upgrade/tmp_check/old_xids.txt differ\n|See /home/pryzbyj/src/postgres/src/bin/pg_upgrade/tmp_check/xids.diff\n|\n|--- /home/pryzbyj/src/postgres/src/bin/pg_upgrade/tmp_check/new_xids.txt 2022-01-15 00:14:23.035294414 -0600\n|+++ /home/pryzbyj/src/postgres/src/bin/pg_upgrade/tmp_check/old_xids.txt 2022-01-15 00:13:59.634945012 -0600\n|@@ -1,5 +1,5 @@\n| relfrozenxid | relminmxid \n| --------------+------------\n|- 3 | 3\n|+ 15594 | 3\n| (1 row)\n\nAlso, the patch needs to be rebased over Peter's vacuum changes.\n\nHere's the changes I used for my test:\n\ndiff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh\nindex c6361e3c085..5eae42192b6 100644\n--- a/src/bin/pg_upgrade/test.sh\n+++ b/src/bin/pg_upgrade/test.sh\n@@ -24,7 +24,8 @@ standard_initdb() {\n \t# without increasing test runtime, run these tests with a custom setting.\n \t# Also, specify \"-A trust\" explicitly to suppress initdb's warning.\n \t# --allow-group-access and --wal-segsize have been added in 
v11.\n-\t\"$1\" -N --wal-segsize 1 --allow-group-access -A trust -x 21000000000\n+\t\"$@\" -N --wal-segsize 1 --allow-group-access -A trust\n+\n \tif [ -n \"$TEMP_CONFIG\" -a -r \"$TEMP_CONFIG\" ]\n \tthen\n \t\tcat \"$TEMP_CONFIG\" >> \"$PGDATA/postgresql.conf\"\n@@ -237,7 +238,7 @@ fi\n \n PGDATA=\"$BASE_PGDATA\"\n \n-standard_initdb 'initdb'\n+standard_initdb 'initdb' -x 21000000000\n \n pg_upgrade $PG_UPGRADE_OPTS --no-sync -d \"${PGDATA}.old\" -D \"$PGDATA\" -b \"$oldbindir\" -p \"$PGPORT\" -P \"$PGPORT\"\n \ndiff --git a/src/bin/pg_upgrade/upgrade_adapt.sql b/src/bin/pg_upgrade/upgrade_adapt.sql\nindex 27c4c7fd011..c5ce8bc95b2 100644\n--- a/src/bin/pg_upgrade/upgrade_adapt.sql\n+++ b/src/bin/pg_upgrade/upgrade_adapt.sql\n@@ -89,3 +89,5 @@ DROP OPERATOR public.#%# (pg_catalog.int8, NONE);\n DROP OPERATOR public.!=- (pg_catalog.int8, NONE);\n DROP OPERATOR public.#@%# (pg_catalog.int8, NONE);\n \\endif\n+\n+DROP TABLE IF EXISTS tab_core_types;\n\n\n", "msg_date": "Sat, 15 Jan 2022 00:39:25 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Wed, Jan 05, 2022 at 06:51:37PM -0500, Bruce Momjian wrote:\n> On Tue, Jan 4, 2022 at 10:22:50PM +0000, Finnerty, Jim wrote:\n\n[skipped]\n\n> > with the \"double-xmax\" representation. 
This would eliminate a whole\n> class of coding errors and would make the code dealing with 64-bit\n> XIDs simpler and more maintainable.\n> \n> Well, yes, we could do this, and it would avoid the complexity of having\n> to support two XID representations, but we would need to accept that\n> fast pg_upgrade would be impossible in such cases, since every page\n> would need to be checked and potentially updated.\n> \n> You might try to do this while the server is first started and running\n> queries, but I think we found out from the online checkpoint patch that\n> having the server in an intermediate state while running queries is very\n> complex --- it might be simpler to just accept two XID formats all the\n> time than enabling the server to run with two formats for a short\n> period. My big point is that this needs more thought.\n\n Probably, some table storage housekeeping would be wanted.\n\n Like a column in pg_class describing the current set of options\nof the table: checksums added, 64-bit xids added, type of 64-bit\nxids (probably some would want to add support for the pgpro\nupgrades), some set of defaults to not include a lot of them in all\npageheaders -- like compressed xid/integer formats or extended\npagesize.\n\n And separate tables that describe the transition state --\nlike when adding checksums, the desired state for the relation\n(checksums), and a set of ranges in the table files that are\nalready transitioned/checked.\n\n That probably will not introduce too much slowdown at least on\nreading, and will add the transition/upgrade mechanics.\n\n\n Weren't there already some discussions about such a feature\nin the mailing lists?\n\n> \n> \n> -- \n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n> \n> If only the physical world exists, free will is an illusion.\n> \n> \n\n\n", "msg_date": "Mon, 17 Jan 2022 09:46:55 +0300", "msg_from": "Ilya Anfimov <ilan@tzirechnoy.com>", "msg_from_op": false, 
"msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi, hackers!\n\nDue to recent upstream changes patch v6 doesn't apply anymore. Rebased it\nwithout major modifications. PFA v7 of a patch.\n\nThe next changes mentioned in [1] are upcoming.\n\nPlease feel free to discuss readme and your opinions on the current patch\nand proposed changes [1].\n\n[1]\nhttps://www.postgresql.org/message-id/CALT9ZEHy9yFQEwptCUznPLciqM9ZSs91yTnNSSiG22m%3DBgCpNA%40mail.gmail.com\n\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Mon, 24 Jan 2022 16:38:54 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi Pavel,\n\n> Please feel free to discuss readme and your opinions on the current patch and proposed changes [1].\n\nJust a quick question about this design choice:\n\n> On-disk tuple format remains unchanged. 32-bit t_xmin and t_xmax store the\n> lower parts of 64-bit XMIN and XMAX values. Each heap page has additional\n> 64-bit pd_xid_base and pd_multi_base which are common for all tuples on a page.\n> They are placed into a pd_special area - 16 bytes in the end of a heap page.\n> Actual XMIN/XMAX for a tuple are calculated upon reading a tuple from a page\n> as follows:\n>\n> XMIN = t_xmin + pd_xid_base.\n> XMAX = t_xmax + pd_xid_base/pd_multi_base.\n\nDid you consider using 4 bytes for pd_xid_base and another 4 bytes for\n(pd_xid_base/pd_multi_base)? This would allow calculating XMIN/XMAX\nas:\n\nXMIN = (t_min_extra_bits << 32) | t_xmin\nXMAX = (t_max_extra_bits << 32) | t_xmax\n\n... and save 8 extra bytes in the pd_special area. 
Or maybe I'm\nmissing some context here?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 28 Jan 2022 17:30:17 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": ">\n> Did you consider using 4 bytes for pd_xid_base and another 4 bytes for\n> (pd_xid_base/pd_multi_base)? This would allow calculating XMIN/XMAX\n> as:\n>\n> XMIN = (t_min_extra_bits << 32) | t_xmin\n> XMAX = (t_max_extra_bits << 32) | t_xmax\n>\n> ... and save 8 extra bytes in the pd_special area. Or maybe I'm\n> missing some context here?\n>\nHi, Alexander!\n\nIn current design it is not possible, as pd_xid_base is roughly just a\nminimum 64-xid of all tuples that may fit this page. So we do not make any\nextra guess that it should be in multiples of 2^32.\n\nIf we make pd_xid_base in multiples of 2^32 then after current XID crosses\nthe border of 2^32 then pages that contains tuples with XMIN/XMAX before\nthis point are not suitable for tuple inserts anymore. In effect we will\nthen have \"sets\" of the pages for each 2^32 \"epoch\" with freed space that\ncan not be used anymore.\n\nI think it's too big a loss for gain of just 8 bytes per page.\n\nThank you for your dive into this matter!\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Fri, 28 Jan 2022 18:43:06 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi,\n\nOn 2022-01-24 16:38:54 +0400, Pavel Borisov wrote:\n> +64-bit Transaction ID's (XID)\n> +=============================\n> +\n> +A limited number (N = 2^32) of XID's required to do vacuum freeze to prevent\n> +wraparound every N/2 transactions. This causes performance degradation due\n> +to the need to exclusively lock tables while being vacuumed. In each\n> +wraparound cycle, SLRU buffers are also being cut.\n\nWhat exclusive lock?\n\n\n> +\"Double XMAX\" page format\n> +---------------------------------\n> +\n> +At first read of a heap page after pg_upgrade from 32-bit XID PostgreSQL\n> +version pd_special area with a size of 16 bytes should be added to a page.\n> +Though a page may not have space for this. Then it can be converted to a\n> +temporary format called \"double XMAX\".\n>\n> +All tuples after pg-upgrade would necessarily have xmin = FrozenTransactionId.\n\nWhy would a tuple after pg-upgrade necessarily have xmin =\nFrozenTransactionId? 
A pg_upgrade doesn't scan the tables, so the pg_upgrade\nitself doesn't do anything to xmins.\n\nI guess you mean that the xmin cannot be needed anymore, because no older\ntransaction can be running?\n\n\n> +In-memory tuple format\n> +----------------------\n> +\n> +In-memory tuple representation consists of two parts:\n> +- HeapTupleHeader from disk page (contains all heap tuple contents, not only\n> +header)\n> +- HeapTuple with additional in-memory fields\n> +\n> +HeapTuple for each tuple in memory stores t_xid_base/t_multi_base - a copies of\n> +page's pd_xid_base/pd_multi_base. With tuple's 32-bit t_xmin and t_xmax from\n> +HeapTupleHeader they are used to calculate actual 64-bit XMIN and XMAX:\n> +\n> +XMIN = t_xmin + t_xid_base. \t\t\t\t\t(3)\n> +XMAX = t_xmax + t_xid_base/t_multi_base.\t\t(4)\n\nWhat identifies a HeapTuple as having this additional data?\n\n\n> +The downside of this is that we can not use tuple's XMIN and XMAX right away.\n> +We often need to re-read t_xmin and t_xmax - which could actually be pointers\n> +into a page in shared buffers and therefore they could be updated by any other\n> +backend.\n\nUgh, that's not great.\n\n\n> +Upgrade from 32-bit XID versions\n> +--------------------------------\n> +\n> +pg_upgrade doesn't change pages format itself. It is done lazily after.\n> +\n> +1. At first heap page read, tuples on a page are repacked to free 16 bytes\n> +at the end of a page, possibly freeing space from dead tuples.\n\nThat will cause a *massive* torrent of writes after an upgrade. Isn't this\npractically making pg_upgrade useless? Imagine a huge cluster where most of\nthe pages are all-frozen, upgraded using link mode.\n\n\nWhat happens if the first access happens on a replica?\n\n\nWhat is the approach for dealing with multixact files? They have xids\nembedded? 
And currently the SLRUs will break if you just let the offsets SLRU\ngrow without bounds.\n\n\n\n> +void\n> +convert_page(Relation rel, Page page, Buffer buf, BlockNumber blkno)\n> +{\n> +\tPageHeader\thdr = (PageHeader) page;\n> +\tGenericXLogState *state = NULL;\n> +\tPage\ttmp_page = page;\n> +\tuint16\tchecksum;\n> +\n> +\tif (!rel)\n> +\t\treturn;\n> +\n> +\t/* Verify checksum */\n> +\tif (hdr->pd_checksum)\n> +\t{\n> +\t\tchecksum = pg_checksum_page((char *) page, blkno);\n> +\t\tif (checksum != hdr->pd_checksum)\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errcode(ERRCODE_INDEX_CORRUPTED),\n> +\t\t\t\t\t errmsg(\"page verification failed, calculated checksum %u but expected %u\",\n> +\t\t\t\t\t\t\tchecksum, hdr->pd_checksum)));\n> +\t}\n> +\n> +\t/* Start xlog record */\n> +\tif (!XactReadOnly && XLogIsNeeded() && RelationNeedsWAL(rel))\n> +\t{\n> +\t\tstate = GenericXLogStart(rel);\n> +\t\ttmp_page = GenericXLogRegisterBuffer(state, buf, GENERIC_XLOG_FULL_IMAGE);\n> +\t}\n> +\n> +\tPageSetPageSizeAndVersion((hdr), PageGetPageSize(hdr),\n> +\t\t\t\t\t\t\t PG_PAGE_LAYOUT_VERSION);\n> +\n> +\tif (was_32bit_xid(hdr))\n> +\t{\n> +\t\tswitch (rel->rd_rel->relkind)\n> +\t\t{\n> +\t\t\tcase 'r':\n> +\t\t\tcase 'p':\n> +\t\t\tcase 't':\n> +\t\t\tcase 'm':\n> +\t\t\t\tconvert_heap(rel, tmp_page, buf, blkno);\n> +\t\t\t\tbreak;\n> +\t\t\tcase 'i':\n> +\t\t\t\t/* no need to convert index */\n> +\t\t\tcase 'S':\n> +\t\t\t\t/* no real need to convert sequences */\n> +\t\t\t\tbreak;\n> +\t\t\tdefault:\n> +\t\t\t\telog(ERROR,\n> +\t\t\t\t\t \"Conversion for relkind '%c' is not implemented\",\n> +\t\t\t\t\t rel->rd_rel->relkind);\n> +\t\t}\n> +\t}\n> +\n> +\t/*\n> +\t * Mark buffer dirty unless this is a read-only transaction (e.g. 
query\n> +\t * is running on hot standby instance)\n> +\t */\n> +\tif (!XactReadOnly)\n> +\t{\n> +\t\t/* Finish xlog record */\n> +\t\tif (XLogIsNeeded() && RelationNeedsWAL(rel))\n> +\t\t{\n> +\t\t\tAssert(state != NULL);\n> +\t\t\tGenericXLogFinish(state);\n> +\t\t}\n> +\n> +\t\tMarkBufferDirty(buf);\n> +\t}\n> +\n> +\thdr = (PageHeader) page;\n> +\thdr->pd_checksum = pg_checksum_page((char *) page, blkno);\n> +}\n\nWait. So you just modify the page without WAL logging or marking it dirty on a\nstandby? I fail to see how that can be correct.\n\nImagine the cluster is promoted, the page is dirtied, and we write it\nout. You'll have written out a completely changed page, without any WAL\nlogging. There's plenty other scenarios.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 28 Jan 2022 14:43:07 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi, Andres!\n\nI've revised the README a little bit to address your corrections and\nquestions. Thanks for this very much!\nA patchset with changed README is attached as v8 here (the code is\nunchanged and identical to v7).\n\n\n> > +The downside of this is that we can not use tuple's XMIN and XMAX right\n> away.\n> > +We often need to re-read t_xmin and t_xmax - which could actually be\n> pointers\n> > +into a page in shared buffers and therefore they could be updated by\n> any other\n> > +backend.\n>\n> Ugh, that's not great.\n>\nAgree. This part is one of the candidates for revision as per proposals\nabove [1] i.e :\n\"2A. Probably refactor it to store precalculated XMIN/XMAX in memory\ntuple representation instead of t_xid_base/t_multi_base\".\n\nWe are working on this change.\n\n\n> What happens if the first access happens on a replica?\n>\n> What is the approach for dealing with multixact files? They have xids\n> embedded? 
And currently the SLRUs will break if you just let the offsets\n> SLRU\n> grow without bounds.\n>\n> Wait. So you just modify the page without WAL logging or marking it dirty\n> on a\n> standby? I fail to see how that can be correct.\n>\n> Imagine the cluster is promoted, the page is dirtied, and we write it\n> out. You'll have written out a completely changed page, without any WAL\n> logging. There's plenty other scenarios.\n>\nIn this part, I suppose you've found a definite bug. Thanks! There are a\ncouple\nof ways how it could be fixed:\n\n1. If we enforce checkpoint at replica promotion then we force full-page\nwrites after each page modification afterward.\n\n2. Maybe it's worth using BufferDesc bit to mark the page as converted to\n64xid but not yet written to disk? For example, one of four bits from\nBUF_USAGECOUNT.\nBM_MAX_USAGE_COUNT = 5 so it will be enough 3 bits to store it. This will\nchange in-memory page representation but will not need WAL-logging which is\nimpossible on a replica.\n\nWhat do you think about it?\n\n[1]\nhttps://www.postgresql.org/message-id/CALT9ZEHy9yFQEwptCUznPLciqM9ZSs91yTnNSSiG22m%3DBgCpNA%40mail.gmail.com", "msg_date": "Wed, 2 Feb 2022 19:10:23 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi hackers,\n\n> In this part, I suppose you've found a definite bug. Thanks! There are a couple\n> of ways how it could be fixed:\n>\n> 1. If we enforce checkpoint at replica promotion then we force full-page writes after each page modification afterward.\n>\n> 2. Maybe it's worth using BufferDesc bit to mark the page as converted to 64xid but not yet written to disk? For example, one of four bits from BUF_USAGECOUNT.\n> BM_MAX_USAGE_COUNT = 5 so it will be enough 3 bits to store it. 
This will change in-memory page representation but will not need WAL-logging which is impossible on a replica.\n>\n> What do you think about it?\n\nI'm having difficulties merging and/or testing\nv8-0002-Add-64bit-xid.patch since I'm not 100% sure which commit this\npatch was targeting. Could you please submit a rebased patch and/or\nshare your development branch on GitHub?\n\nI agree with Bruce it would be great to deliver this in PG15. Please\nlet me know if you believe it's unrealistic for any reason so I will\nfocus on testing and reviewing other patches.\n\nFor now, I'm changing the status of the patch to \"Waiting on Author\".\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 2 Mar 2022 16:25:33 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi hackers!\n\nHi! Here is the rebased version.\n>\nI'd like to add a description of what was done in v9:\n- The patch is rebased on current master branch\n- In-memory tuple storage format was refactored as promised to have\npre-calculated 64bit xmin and xmax, not just copies of pd_xid_base and\npd_multi_base.\n- Fixed bug reported by Andres Freund, with lazy conversion of pages\nupgraded from 32 to 64 xid when first tuple read (and therefore lazy\nconversion) is done in read-only state (read-only xact or on replica). In\nthis case now in memory buffer descriptor will be marked\nwith REGBUF_CONVERTED flag. When cluster comes to read-write state this\nwill lead to emitting full page write xlog instruction for this page.\n\nRelevant changes in README are also done.\n\nWe'd very much appreciate enthusiasm to have 64 bit xid's in PG15 and any\neffort to review and test this feature.\n\nAlexander, thanks for your attention to the patchset. 
Your questions and\nreview are very much welcome!\nThe participation of other hackers is highly appreciated as always!\n\n--\nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Wed, 2 Mar 2022 18:43:11 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Wed, Mar 02, 2022 at 06:43:11PM +0400, Pavel Borisov wrote:\n> Hi hackers!\n> \n> Hi! Here is the rebased version.\n\nThe patch doesn't apply - I suppose the patch is relative a forked postgres\nwhich already has other patches.\n\nhttp://cfbot.cputube.org/pavel-borisov.html\n\nNote also that I mentioned an issue with pg_upgrade. 
Handling that well\nis probably the most important part of the patch.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 2 Mar 2022 10:35:23 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi,\n\nOn 2022-03-02 16:25:33 +0300, Aleksander Alekseev wrote:\n> I agree with Bruce it would be great to deliver this in PG15.\n\n> Please let me know if you believe it's unrealistic for any reason so I will\n> focus on testing and reviewing other patches.\n\nI don't see 15 as a realistic target for this patch. There's huge amounts of\nwork left, it has gotten very little review.\n\nI encourage trying to break down the patch into smaller incrementally useful\npieces. E.g. making all the SLRUs 64bit would be a substantial and\nindependently committable piece.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 2 Mar 2022 13:22:34 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi hackers,\n\n> The patch doesn't apply - I suppose the patch is relative a forked postgres\n\nNo, the authors just used a little outdated `master` branch. I\nsuccessfully applied it against 31d8d474 and then rebased to the\nlatest master (62ce0c75). The new version is attached.\n\nNot 100% sure if my rebase is correct since I didn't invest too much\ntime into reviewing the code. But at least it passes `make\ninstallcheck` locally. Let's see what cfbot will tell us.\n\n> I encourage trying to break down the patch into smaller incrementally useful\n> pieces. E.g. making all the SLRUs 64bit would be a substantial and\n> independently committable piece.\n\nCompletely agree. And the changes like:\n\n+#if 0 /* XXX remove unit tests */\n\n... 
suggest that the patch is pretty raw in its current state.\n\nPavel, Maxim, don't you mind me splitting the patchset, or would you\nlike to do it yourself and/or maybe include more changes? I don't know\nhow actively you are working on this.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Thu, 3 Mar 2022 14:07:25 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": ">\n> > The patch doesn't apply - I suppose the patch is relative a forked\n> postgres\n>\n> No, the authors just used a little outdated `master` branch. I\n> successfully applied it against 31d8d474 and then rebased to the\n> latest master (62ce0c75). The new version is attached.\n\nNot 100% sure if my rebase is correct since I didn't invest too much\n> time into reviewing the code. But at least it passes `make\n> installcheck` locally. Let's see what cfbot will tell us.\n>\nThank you very much! We'll do the same rebase and check this soon. Let's\nuse v10 now.\n\n\n> > I encourage trying to break down the patch into smaller incrementally\n> useful\n> > pieces. E.g. making all the SLRUs 64bit would be a substantial and\n> > independently committable piece.\n>\n> Completely agree. And the changes like:\n>\n> +#if 0 /* XXX remove unit tests */\n>\n> ... suggest that the patch is pretty raw in its current state.\n>\n> Pavel, Maxim, don't you mind me splitting the patchset, or would you\n> like to do it yourself and/or maybe include more changes? I don't know\n> how actively you are working on this.\n>\nI don't mind and appreciate you joining this. If you split this we'll just\nmake the next versions based on it. Of course, there is much to do and we\nwork on this patch, including pg_upgrade test fail reported by Justin,\nwhich we haven't had time to concentrate on before. We try to do changes in\nsmall batches so I consider we can manage parallel changes. 
At least I read\nthis thread very often and can answer soon, even if our new versions of\npatches are not ready.\n\nAgain I consider the work you propose useful and big thanks to you,\nAlexander!\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Thu, 3 Mar 2022 15:22:39 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "BTW messages with patches in this thread are always invoke manual spam\nmoderation and we need to wait for ~3 hours before the message with patch\nbecomes visible in the hackers thread. Now when I've already answered\nAlexander's letter with v10 patch the very message (and a patch) I've\nanswered is still not visible in the thread and to CFbot.\n\nCan something be done in hackers' moderation engine to make new versions\npatches become visible hassle-free?\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Thu, 3 Mar 2022 15:34:53 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi!\n\nWe've rebased patchset onto the current master. The result is almost the\nsame as Alexander's v10 (it is a shame it is still in moderation and not\nvisible in the thread).\nAnyway, this is the v11 patch. 
Reviews are very welcome.\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Thu, 3 Mar 2022 15:23:11 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi Pavel!\n\nOn Thu, Mar 3, 2022 at 2:35 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> BTW messages with patches in this thread are always invoke manual spam moderation and we need to wait for ~3 hours before the message with patch becomes visible in the hackers thread. Now when I've already answered Alexander's letter with v10 patch the very message (and a patch) I've answered is still not visible in the thread and to CFbot.\n>\n> Can something be done in hackers' moderation engine to make new versions patches become visible hassle-free?\n\nIs your email address subscribed to the pgsql-hackers mailing list?\nAFAIK, moderation is only applied for non-subscribers.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 3 Mar 2022 16:34:16 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": ">\n> > BTW messages with patches in this thread are always invoke manual spam\n> moderation and we need to wait for ~3 hours before the message with patch\n> becomes visible in the hackers thread. Now when I've already answered\n> Alexander's letter with v10 patch the very message (and a patch) I've\n> answered is still not visible in the thread and to CFbot.\n> >\n> > Can something be done in hackers' moderation engine to make new versions\n> patches become visible hassle-free?\n>\n> Is your email address subscribed to the pgsql-hackers mailing list?\n> AFAIK, moderation is only applied for non-subscribers.\n>\nHi, Alexander!\n\nYes, it is in the list. The problem is that patch is over 1Mb. So it\nstrictly goes through moderation. 
And this is unchanged for 2 months\nalready.\nI was advised to use .gz, which I will do next time.\n\nI've requested increasing threshold to 2 MB [1]\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CALT9ZEGbAR84q_emsf1TUMPqXT%3Dc8CxN16g-HQCxgkLzekM%2BQg%40mail.gmail.com\n--\nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Thu, 3 Mar 2022 17:40:53 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Greetings,\n\n* Pavel Borisov (pashkin.elfe@gmail.com) wrote:\n> > > BTW messages with patches in this thread are always invoke manual spam\n> > moderation and we need to wait for ~3 hours before the message with patch\n> > becomes visible in the hackers thread. 
Now when I've already answered\n> > Alexander's letter with v10 patch the very message (and a patch) I've\n> > answered is still not visible in the thread and to CFbot.\n> > >\n> > > Can something be done in hackers' moderation engine to make new versions\n> > patches become visible hassle-free?\n> >\n> > Is your email address subscribed to the pgsql-hackers mailing list?\n> > AFAIK, moderation is only applied for non-subscribers.\n> \n> Yes, it is in the list. The problem is that patch is over 1Mb. So it\n> strictly goes through moderation. And this is unchanged for 2 months\n> already.\n\nRight, >1MB will be moderated, as will emails that are CC'd to multiple\nlists, and somehow this email thread ended up with two different\naddresses for -hackers, which isn't good.\n\n> I was advised to use .gz, which I will do next time.\n\nBetter would be to break the patch down into reasonable and independent\npieces for review and commit on separate threads as suggested previously\nand not to send huge patches to the list with the idea that someone is\ngoing to actually fully review and commit them. That's just not likely\nto end up working well anyway.\n\nThanks,\n\nStephen", "msg_date": "Thu, 3 Mar 2022 12:13:51 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi hackers,\n\n> We've rebased patchset onto the current master. The result is almost the\n> same as Alexander's v10 (it is a shame it is still in moderation and not\n> visible in the thread). Anyway, this is the v11 patch. Reviews are very\n> welcome.\n\nHere is a rebased and slightly modified version of the patch.\n\nI extracted the introduction of XID_FMT macro to a separate patch. Also,\nI noticed that sometimes PRIu64 was used to format XIDs instead. I changed it\nto XID_FMT for consistency. 
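For illustration, the XID_FMT idea boils down to something like this (a hypothetical sketch with made-up names, not the exact code from the patch):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch: with 64-bit XIDs, TransactionId is a 64-bit
 * integer, and a single format macro keeps every call site consistent
 * instead of hard-coding PRIu64 here and there. */
typedef uint64_t TransactionId;
#define XID_FMT "%" PRIu64

/* Format an xid into a caller-supplied buffer using the shared macro. */
int
format_xid(char *buf, size_t len, TransactionId xid)
{
    return snprintf(buf, len, "xid " XID_FMT, xid);
}
```

Whatever spelling ends up in the tree, the point is only that call sites stop mixing XID_FMT and PRIu64.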
v12-0003 can be safely delivered in PG15.\n\n> I encourage trying to break down the patch into smaller incrementally useful\n> pieces. E.g. making all the SLRUs 64bit would be a substantial and\n> independently committable piece.\n\nI'm going to address this in follow-up emails.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 8 Mar 2022 00:15:44 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi hackers,\n\n> Here is a rebased and slightly modified version of the patch.\n>\n> I extracted the introduction of XID_FMT macro to a separate patch. Also,\n> I noticed that sometimes PRIu64 was used to format XIDs instead. I changed it\n> to XID_FMT for consistency. v12-0003 can be safely delivered in PG15.\n>\n> > I encourage trying to break down the patch into smaller incrementally useful\n> > pieces. E.g. making all the SLRUs 64bit would be a substantial and\n> > independently committable piece.\n>\n> I'm going to address this in follow-up emails.\n\ncfbot is not happy because several files are missing in v12. Here is a\ncorrected and rebased version. I also removed the \"#undef PRIu64\"\nchange from include/c.h since previously I replaced PRIu64 usage with\nXID_FMT.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 8 Mar 2022 10:49:43 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi hackers,\n\n> I extracted the introduction of XID_FMT macro to a separate patch. Also,\n> I noticed that sometimes PRIu64 was used to format XIDs instead. I changed it\n> to XID_FMT for consistency. v12-0003 can be safely delivered in PG15.\n\n[...]\n\n> > > I encourage trying to break down the patch into smaller incrementally useful\n> > > pieces. E.g. 
making all the SLRUs 64bit would be a substantial and\n> > > independently committable piece.\n> >\n> > I'm going to address this in follow-up emails.\n>\n> cfbot is not happy because several files are missing in v12. Here is a\n> corrected and rebased version. I also removed the \"#undef PRIu64\"\n> change from include/c.h since previously I replaced PRIu64 usage with\n> XID_FMT.\n\nHere is a new version of the patchset. SLRU refactoring was moved to a\nseparate patch. Both v14-0003 (XID_FMT macro) and v14-0004 (SLRU\nrefactoring) can be delivered in PG15.\n\nOne thing I couldn't understand so far is why SLRU_PAGES_PER_SEGMENT\nshould necessarily be increased in order to make 64-bit XIDs work. I\nkept the current value (32) in v14-0004 but changed it to 2048 in\n./v14-0005 (where we start using 64 bit XIDs) as it was in the\noriginal patch. Is this change really required?\n\n--\nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 8 Mar 2022 19:27:16 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi hackers,\n\n> Here is a new version of the patchset. SLRU refactoring was moved to a\n> separate patch. Both v14-0003 (XID_FMT macro) and v14-0004 (SLRU\n> refactoring) can be delivered in PG15.\n\nHere is a new version of the patchset. The changes compared to v14 are\nminimal. Most importantly, the GCC warning reported by cfbot was\n(hopefully) fixed. The patch order was also altered, v15-0001 and\nv15-0002 are targeting PG15 now, the rest are targeting PG16.\n\nAlso for the record, I tested the patchset on Raspberry Pi 3 Model B+\nin the hope that it will discover some new flaws. 
To my\ndisappointment, it didn't.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Fri, 11 Mar 2022 20:26:26 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi hackers,\n\n> > Here is a new version of the patchset. SLRU refactoring was moved to a\n> > separate patch. Both v14-0003 (XID_FMT macro) and v14-0004 (SLRU\n> > refactoring) can be delivered in PG15.\n>\n> Here is a new version of the patchset. The changes compared to v14 are\n> minimal. Most importantly, the GCC warning reported by cfbot was\n> (hopefully) fixed. The patch order was also altered, v15-0001 and\n> v15-0002 are targeting PG15 now, the rest are targeting PG16.\n>\n> Also for the record, I tested the patchset on Raspberry Pi 3 Model B+\n> in the hope that it will discover some new flaws. To my\n> disappointment, it didn't.\n\nHere is the rebased version of the patchset. Also, I updated the\ncommit messages for v16-0001 and v16-002 to make them look more like\nthe rest of the PostgreSQL commit messages. They include the link to\nthis discussion now as well.\n\nIMO v16-0001 and v16-0002 are in pretty good shape and are as much as\nwe are going to deliver in PG15. I'm going to change the status of the\nCF entry to \"Ready for Committer\" somewhere this week unless someone\nbelieves v16-0001 and/or v16-0002 shouldn't be merged.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 14 Mar 2022 13:32:04 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi hackers,\n\n> IMO v16-0001 and v16-0002 are in pretty good shape and are as much as\n> we are going to deliver in PG15. 
I'm going to change the status of the\n> CF entry to \"Ready for Committer\" somewhere this week unless someone\n> believes v16-0001 and/or v16-0002 shouldn't be merged.\n\nSorry for the missing attachment. Here it is.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 14 Mar 2022 13:33:15 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi! Here is updated version of the patch, based on Alexander's ver16.\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Mon, 14 Mar 2022 17:16:32 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi, Hackers!\n\n> Hi! Here is updated version of the patch, based on Alexander's ver16.\n>\nI'd like to add a few quick notes on what's been done in v17.\n\nPatches 0001 and 0002 that are planned to be committed to PG15 are almost\nunchanged with the exception of one unnecessary cast in 0002 removed.\n\nWe've also addressed several issues in patch 0005 (which is planned for\nPG16):\n- The bug with frozen xids after pg_upgrade, reported by Justin [1]\n- Added proper processing of double xmax pages in\nHeapPageSetPruneXidInternal()\n- Fixed xids comparison. Initially in the patch it was changed to simple <\n<= => > for 64 bit values. Now v17 patch has returned this to the way\nsimilar to what is used in STABLE for 32-bit xids, but using modulus-64\nnumeric ring. The main goal of this change was to fix SRLU tests that\nwere mentioned\nby Alexander to have been disabled. 
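To make the ring idea concrete, a minimal sketch of such a wraparound-aware comparison (hypothetical names, not the exact code from the patch) could look like:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch: compare 64-bit xids on the modulus-2^64 ring,
 * mirroring the 32-bit TransactionIdPrecedes() logic rather than a
 * plain < on the raw values. */
typedef uint64_t TransactionId64;

bool
xid64_precedes(TransactionId64 a, TransactionId64 b)
{
    /* The signed distance from b to a is negative when a is "behind" b,
     * even for values on opposite sides of a wrap of the counter. */
    int64_t diff = (int64_t) (a - b);
    return diff < 0;
}
```

Unlike a plain a < b on raw values, the signed cast keeps the ordering correct across a hypothetical wraparound of the 64-bit counter.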
We've fixed and enabled most of them,\nbut some of them still need to be fixed and enabled.\n\nAlso, we've pgindent-ed all the patches.\n\nAs patches that are planned to be delivered to PG15 are almost unchanged, I\ncompletely agree with Alexander's plan to consider these patches (0001 and\n0002) as RfC.\n\nAll activity, improvement, review, etc. related to the whole patchset is\nalso very much appreciated. 
Big thanks to Alexander for working on the patch set![1] https://www.postgresql.org/message-id/20220115063925.GS14051%40telsasoft.com-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com", "msg_date": "Mon, 14 Mar 2022 18:48:21 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": ">\n> Hi, Hackers!\n>\n>> Hi! Here is updated version of the patch, based on Alexander's ver16.\n>>\n> I'd like to add a few quick notes on what's been done in v17.\n>\n> Patches 0001 and 0002 that are planned to be committed to PG15 are almost\n> unchanged with the exception of one unnecessary cast in 0002 removed.\n>\n> We've also addressed several issues in patch 0005 (which is planned for\n> PG16):\n> - The bug with frozen xids after pg_upgrade, reported by Justin [1]\n> - Added proper processing of double xmax pages in\n> HeapPageSetPruneXidInternal()\n> - Fixed xids comparison. Initially in the patch it was changed to simple <\n> <= => > for 64 bit values. Now v17 patch has returned this to the way\n> similar to what is used in STABLE for 32-bit xids, but using modulus-64\n> numeric ring. The main goal of this change was to fix SRLU tests that were mentioned\n> by Alexander to have been disabled. We've fixed and enabled most of them,\n> but some of them are still need to be fixed and enabled.\n>\n> Also, we've pgindent-ed all the patches.\n>\n> As patches that are planned to be delivered to PG15 are almost unchanged,\n> I completely agree with Alexander's plan to consider these patches (0001\n> and 0002) as RfC.\n>\n> All activity, improvement, review, etc. related to the whole patchset is\n> also very much appreciated. 
Big thanks to Alexander for working on the\n> patch set!\n>\n> [1]\n> https://www.postgresql.org/message-id/20220115063925.GS14051%40telsasoft.com\n>\nAlso, the patch v17 (0005) returns SLRU_PAGES_PER_SEGMENT to the previous\nvalue of 32.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nHi, Hackers! Hi! Here is updated version of the patch, based on Alexander's ver16. I'd like to add a few quick notes on what's been done in v17.Patches 0001 and 0002 that are planned to be committed to PG15 are almost unchanged with the exception of one unnecessary cast in 0002 removed. We've also addressed several issues in patch 0005 (which is planned for PG16):- The bug with frozen xids after pg_upgrade, reported by Justin [1]- Added proper processing of double xmax pages in HeapPageSetPruneXidInternal()- Fixed xids comparison. Initially in the patch it was changed to simple < <= => > for 64 bit values. Now v17 patch has returned this to the way similar to what is used in STABLE for 32-bit xids, but using modulus-64 numeric ring. The main goal of this change was to fix SRLU tests that were mentioned by Alexander to have been disabled. We've fixed and enabled most of them, but some of them are still need to be fixed and enabled.Also, we've pgindent-ed all the patches.As patches that are planned to be delivered to PG15 are almost unchanged, I completely agree with Alexander's plan to consider these patches (0001 and 0002) as RfC.All activity, improvement, review, etc. related to the whole patchset is also very much appreciated. 
Big thanks to Alexander for working on the patch set![1] https://www.postgresql.org/message-id/20220115063925.GS14051%40telsasoft.comAlso, the patch v17 (0005) returns SLRU_PAGES_PER_SEGMENT to the previous value of 32.-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com", "msg_date": "Mon, 14 Mar 2022 19:43:40 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "At Mon, 14 Mar 2022 19:43:40 +0400, Pavel Borisov <pashkin.elfe@gmail.com> wrote in \n> > I'd like to add a few quick notes on what's been done in v17.\n\nI have some commens by a quick look-through. Apologize in advance for\nwrong comments from the lack of the knowledge of the whole patch-set.\n\n> > Patches 0001 and 0002 that are planned to be committed to PG15 are almost\n> > unchanged with the exception of one unnecessary cast in 0002 removed.\n\n0001:\n\n The XID_FMT has quite bad impact on the translatability of error\n messages. 3286065651 has removed INT64_FORMAT from translatable\n texts for the reason. This re-introduces that in several places.\n 0001 itself does not harm but 0005 replaces XID_FMT with\n INT64_FORMAT. Other patches have the same issue, too.\n\n> > We've also addressed several issues in patch 0005 (which is planned for\n> > PG16):\n> > - The bug with frozen xids after pg_upgrade, reported by Justin [1]\n> > - Added proper processing of double xmax pages in\n> > HeapPageSetPruneXidInternal()\n> > - Fixed xids comparison. Initially in the patch it was changed to simple <\n> > <= => > for 64 bit values. Now v17 patch has returned this to the way\n> > similar to what is used in STABLE for 32-bit xids, but using modulus-64\n> > numeric ring. 
The main goal of this change was to fix SRLU tests that were mentioned\n\nIf IIUC, the following part in 0002 doesn't consider wraparound.\n\n-asyncQueuePagePrecedes(int p, int q)\n+asyncQueuePagePrecedes(int64 p, int64 q)\n {\n-\treturn asyncQueuePageDiff(p, q) < 0;\n+\treturn p < q;\n }\n\n> > by Alexander to have been disabled. We've fixed and enabled most of them,\n> > but some of them are still need to be fixed and enabled.\n> >\n> > Also, we've pgindent-ed all the patches.\n\n0005 has \"new blank line at EOF\".\n\n> > As patches that are planned to be delivered to PG15 are almost unchanged,\n> > I completely agree with Alexander's plan to consider these patches (0001\n> > and 0002) as RfC.\n> >\n> > All activity, improvement, review, etc. related to the whole patchset is\n> > also very much appreciated. Big thanks to Alexander for working on the\n> > patch set!\n> >\n> > [1]\n> > https://www.postgresql.org/message-id/20220115063925.GS14051%40telsasoft.com\n> >\n> Also, the patch v17 (0005) returns SLRU_PAGES_PER_SEGMENT to the previous\n> value of 32.\n\n0002 re-introduces pg_strtouint64, which have been recently removed by\n3c6f8c011f.\n> Simplify the general-purpose 64-bit integer parsing APIs\n> \n> pg_strtouint64() is a wrapper around strtoull/strtoul/_strtoui64, but\n> it seems no longer necessary to have this indirection.\n..\n> type definition int64/uint64. For that, add new macros strtoi64() and\n> strtou64() in c.h as thin wrappers around strtol()/strtoul() or\n> strtoll()/stroull(). This makes these functions available everywhere\n\nregards.\n\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 15 Mar 2022 11:02:22 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi Kyotaro!\n\n0001:\n>\n> The XID_FMT has quite bad impact on the translatability of error\n> messages. 
3286065651 has removed INT64_FORMAT from translatable\n> texts for the reason. This re-introduces that in several places.\n> 0001 itself does not harm but 0005 replaces XID_FMT with\n> INT64_FORMAT. Other patches have the same issue, too.\n>\n I do understand your concern and I wonder how I can do this better? My\nfirst intention was to replace XID_FMT with %llu and INT64_FORMAT with\n%lld. This should solve the translatability issue, but I'm not sure about\nportability of this. Should this work on Windows, etc? Can you advise me on\nthe best solution?\n\nWe've fixed all the other things mentioned. Thanks!\n\nAlso added two fixes:\n- CF bot was unhappy with pg_upgrade test in v17 because I forgot to add a\nfix for computation of relminmxid during vacuum on a fresh database.\n- Replace frozen or invalid x_min with FrozenTransactionId or\nInvalidTransactionId respectively during tuple conversion to 64xid.\n\nReviews are welcome as always! Thanks!\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Tue, 15 Mar 2022 18:48:34 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi,\n\nOn Mon, Mar 14, 2022 at 01:32:04PM +0300, Aleksander Alekseev wrote:\n> IMO v16-0001 and v16-0002 are in pretty good shape and are as much as\n> we are going to deliver in PG15. I'm going to change the status of the\n> CF entry to \"Ready for Committer\" somewhere this week unless someone\n> believes v16-0001 and/or v16-0002 shouldn't be merged.\n\nNot sure, but if you want more people to look at them, probably best\nwould be to start a new thread with just the v15-target patches. 
Right\nnow, one has to download your tarball, extract it and look at the\npatches in there.\n\nI hope v16-0001 and v16-0002 are small enough (I didn't do the above)\nthat they can just be attached normally?\n\n\nMichael\n\n-- \nMichael Banck\nTeam Lead PostgreSQL\nProject Manager\nTel.: +49 2166 9901-171\nMail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nManagement: Dr. Michael Meskes, Geoff Richardson, Peter Lilley\n\nOur handling of personal data is subject to:\nhttps://www.credativ.de/en/contact/privacy/\n\n\n", "msg_date": "Tue, 15 Mar 2022 17:15:31 +0100", "msg_from": "Michael Banck <michael.banck@credativ.de>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "At Tue, 15 Mar 2022 18:48:34 +0300, Maxim Orlov <orlovmg@gmail.com> wrote in \n> Hi Kyotaro!\n> \n> 0001:\n> >\n> > The XID_FMT has quite bad impact on the translatability of error\n> > messages. 3286065651 has removed INT64_FORMAT from translatable\n> > texts for the reason. This re-introduces that in several places.\n> > 0001 itself does not harm but 0005 replaces XID_FMT with\n> > INT64_FORMAT. Other patches have the same issue, too.\n> >\n> I do understand your concern and I wonder how I can do this better? My\n> first intention was to replace XID_FMT with %llu and INT64_FORMAT with\n> %lld. This should solve the translatability issue, but I'm not sure about\n> portability of this. Should this work on Windows, etc? Can you advise me on\n> the best solution?\n\nDoesn't doing \"errmsg(\"blah blah %lld ..\", (long long) xid)\" work?\n\n> We've fixed all the other things mentioned. 
Thanks!\n> \n> Also added two fixes:\n> - CF bot was unhappy with pg_upgrade test in v17 because I forgot to add a\n> fix for computation of relminmxid during vacuum on a fresh database.\n> - Replace frozen or invalid x_min with FrozenTransactionId or\n> InvalidTransactionId respectively during tuple conversion to 64xid.\n> \n> Reviews are welcome as always! Thanks!\n\nMy pleasure.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 16 Mar 2022 12:08:21 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi!\n\nShame on me, in v18 I forgot to add a fix for pg_verifybackup. Here is a\nnew version. I think this version will make CF bot happy.\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Wed, 16 Mar 2022 11:53:33 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi hackers,\n\nI forked this thread as requested by several people in the discussion [1].\n\nThe new thread contains two patches that are targeting PG15. I replaced the\nthread in the current CF to [1]. This thread was added to the next CF. I\nsuggest we continue discussing changes targeting PG >= 16 here.\n\n[1]:\nhttps://postgr.es/m/CAJ7c6TPDOYBYrnCAeyndkBktO0WG2xSdYduTF0nxq+vfkmTF5Q@mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Thu, 17 Mar 2022 16:20:32 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "> I forked this thread as requested by several people in the discussion [1].\n>\n> The new thread contains two patches that are targeting PG15. I replaced\n> the thread in the current CF to [1]. This thread was added to the next CF.\n> I suggest we continue discussing changes targeting PG >= 16 here.\n>\n> [1]:\n> https://postgr.es/m/CAJ7c6TPDOYBYrnCAeyndkBktO0WG2xSdYduTF0nxq+vfkmTF5Q@mail.gmail.com\n>\nThanks!\nWe're planning to add 0001 and 0002 with next v20 to the mentioned [1]\nthread to deliver them into v15\nIn this thread we'll add full patchset v20 as 0003, 0004 and 0005 are not\nsupposed to be committed without 0001 and 0002 anyway.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Thu, 17 Mar 2022 18:54:31 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi!\n\nWe've revised the whole patch set.\nThings changed:\n - use localizable printf format, compatible with 32 and 64 bit xids\n - replace str2unt64 and similar functions with strtou64 call\n - rebase onto current master branch\n - use proper type modifiers for sscanf calls\n\nWhat about adding 0003 patch into [1] to deliver it into PG15. 
I think it's\nquite possible.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAJ7c6TPDOYBYrnCAeyndkBktO0WG2xSdYduTF0nxq+vfkmTF5Q@mail.gmail.com\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Thu, 17 Mar 2022 19:48:53 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi!\n\nHere is v22 with following changes:\n- use explicit unsigned long long cast for printf/elog XIDs instead of\nmacro XID_TYPE\n- add *.po localization\n- fix forgotten XIDs format changes in pg_resetwal.c\n- 0006 patch refactoring\n\nYour reviews are very welcome!\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Fri, 18 Mar 2022 18:22:00 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi, hackers!\n\nWhile working on the patchset I've noticed that FullTransactionId lost its\nsemantics in its scope. TransactionId is supposed to be 64bit (default) and\nepoch approach becomes outdated. What do you think of fully removing\nFullTransactionId and its support functions and macro?\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nHi, hackers!While working on the patchset I've noticed that FullTransactionId lost its semantics in its scope. TransactionId is supposed to be 64bit (default) and epoch approach becomes outdated. 
What do you think of fully removing FullTransactionId and its support functions and macro?-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com", "msg_date": "Fri, 18 Mar 2022 20:08:14 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi,\n\nOn 2022-03-18 18:22:00 +0300, Maxim Orlov wrote:\n> Here is v22 with following changes:\n> - use explicit unsigned long long cast for printf/elog XIDs instead of\n> macro XID_TYPE\n> - add *.po localization\n> - fix forgotten XIDs format changes in pg_resetwal.c\n> - 0006 patch refactoring\n\nFWIW, 0006 still does way way too many things at once, even with 0001-0003\nsplit out. I don't really see a point in reviewing / posting new versions\nuntil that's done.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 18 Mar 2022 16:26:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Fri, Mar 11, 2022 at 08:26:26PM +0300, Aleksander Alekseev wrote:\n> Hi hackers,\n> \n> > Here is a new version of the patchset. SLRU refactoring was moved to a\n> > separate patch. Both v14-0003 (XID_FMT macro) and v14-0004 (SLRU\n> > refactoring) can be delivered in PG15.\n> \n> Here is a new version of the patchset. The changes compared to v14 are\n> minimal. Most importantly, the GCC warning reported by cfbot was\n> (hopefully) fixed. The patch order was also altered, v15-0001 and\n> v15-0002 are targeting PG15 now, the rest are targeting PG16.\n\nDo you know that you can test a branch on cirrus without using CF bot or\nmailing the patch to the list ? 
See src/tools/ci/README\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 19 Mar 2022 12:40:59 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": ">\n>\n> Do you know that you can test a branch on cirrus without using CF bot or\n> mailing the patch to the list ? See src/tools/ci/README\n>\n\nYes, sure! The main reason to post updates of this patchset is so that hackers\nwho are interested in the progress have a relevant version with updates.\nThis patch is not for cfbot (I suppose it even doesn't trigger cfbot, as it is\nattached to the next CF).\n\nPavel.\n\n>\n\n\n", "msg_date": "Sat, 19 Mar 2022 22:08:56 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi!\n\nJust to make this thread up to date, here is patch v30 relevant to the\nchanges in [1].\nI've been running make check-world for about 2 days on Ubuntu 64 bit and\nDebian 32 bit and I don't have any failures.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CALT9ZEHLGm6dUd2SdpJxDd5t9wLYXDm6OGFgZS_8jDzFSnLvvQ%40mail.gmail.com#bedc9eb11b1b50402729b55e557aec78\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Mon, 28 Mar 2022 11:15:53 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi!\n\nHere is a rebased version of the patch v31.\n\nThings changed:\n- refactoring lazy page conversion after upgrade from prev versions\n- 
refactoring slru segment resize\n- remove U64FromFullTransactionId and FullTransactionIdFromU64\n- compatibility changes with recent improvements in vacuum\n- move 64-bit XID related tests from test.sh into 002_pg_upgrade.pl\n\n>\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Tue, 5 Apr 2022 13:59:42 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi!\n\nHere is a rebased version of the patch v32.\n\nThings changed:\n- revert unneeded 64-bit xid related changes on sequence pages\n- fix in convert_heap\n- fix new page init when full_page_writes=off\n- added test.sh changes into new 002_pg_upgrade.pl\n- and massive refactoring of page conversions after pg_upgrade\n- made xid type alignment compatible with AIX double alignment\n\nReviews are very welcome!\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Wed, 13 Apr 2022 13:48:52 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi!\n\nHere is an updated version of the patch.\nMajor changes are:\n- single out options to initialize cluster with given xid/mxid/mxoff into\nseparate patch 0004 with a purpose of review and apply it separately before\nthe main patch.\n We also created a separate CF entry to handle this [1].\n- add unit tests for lazy page conversion from 32 to 64 bits xid format\n(inside patch 0008).\n- make logical replication of xid format to be 64 bit and add test (inside\npatch 0008).\n- remove unnecessary padding to compactify XLogRecord\n- 32 to 64 bit page lazy conversion refactoring\n- rebase to recent upstream branch\n\nPatches 0001-0003 are identical to the v33 from Aleksander Alekseev in\nthread 
[2].\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CACG=ezaa4vqYjJ16yoxgrpa-=gXnf0Vv3Ey9bjGrRRFN2YyWFQ@mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/flat/CAJ7c6TPDOYBYrnCAeyndkBktO0WG2xSdYduTF0nxq%2BvfkmTF5Q%40mail.gmail.com\n\nReviews are very welcome!\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Fri, 13 May 2022 16:11:08 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi!\n\nHere is a new rebased version of the patchset plus minor changes:\n- return initial cluster xid to be FirstNormalTransactionId\n- remove unnecessary truncation in TruncateSUBTRANS\n\nOnly 0008 patch is changed.\n\nReviews are very welcome!\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Fri, 20 May 2022 17:38:03 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi!\n\nHere is a rebased and improved version of the patchset: now we use 64 bit\natomic operations on shared memory which was not previously warrantied on\n32 bit architectures.\n\nBefore this change under heavy transaction concurrency we've got a warning\n\"xmin is far in the past\", rarely. This was seen on 32 bit architectures.\n\nOn 64 bit these changes do not change performance since 64 bit atomicity is\nautomatically fulfilled.\n\nOnly 0008 patch is changed.\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Fri, 27 May 2022 16:48:48 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi, hackers!\n\nI've updated a patchset for 64-xid (actually only 0008 patch is changed).\n\nThe update addresses a corner case of not completing VACUUM FULL after\npg_upgrade from the cluster containing a maximum size tuple in plain\nstorage. 
Page with such tuples can not be converted to 64-xid format as\nthere is no room for HeapPageSpecial, so it remains in DoubleXmax format\nand this can not be changed until that tuple version is deleted. The change\nmakes VACUUM FULL copy these pages instead of throwing an error.\n\nThe patchset is also rebased onto a current master branch.\n\nYour discussion and thoughts are very much welcome!\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Tue, 7 Jun 2022 18:15:30 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi, hackers!\nThe patch stopped applying due to upstream changes.\nI've rebased it again. PFA v38.\n\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Thu, 9 Jun 2022 15:45:55 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "This seems to be causing cfbot/cirrusci to time out.\n\nHere's the build history\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/38/3594\n\nhttps://cirrus-ci.com/task/4809278652416000 4 weeks ago on macos\nhttps://cirrus-ci.com/task/5559884417597440 2 weeks ago on macos\nhttps://cirrus-ci.com/task/6629554545491968 2 weeks ago on macos\nhttps://cirrus-ci.com/task/5253255562264576 this week on freebsd\n\nIt seems like there's a larger discussion to be had about the architecture\nabout the patch, but I thought I'd point out this detail.\n\n\n", "msg_date": "Thu, 30 Jun 2022 21:53:35 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": ">\n> This seems to be causing cfbot/cirrusci to time out.\n>\n> Here's the build history\n> 
https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/38/3594\n>\n> https://cirrus-ci.com/task/4809278652416000 4 weeks ago on macos\n> https://cirrus-ci.com/task/5559884417597440 2 weeks ago on macos\n> https://cirrus-ci.com/task/6629554545491968 2 weeks ago on macos\n> https://cirrus-ci.com/task/5253255562264576 this week on freebsd\n>\n> It seems like there's a larger discussion to be had about the architecture\n> about the patch, but I thought I'd point out this detail.\n>\nThanks! Will check it out.\n\nPavel\n\nThis seems to be causing cfbot/cirrusci to time out.\n\nHere's the build history\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/38/3594\n\nhttps://cirrus-ci.com/task/4809278652416000 4 weeks ago on macos\nhttps://cirrus-ci.com/task/5559884417597440 2 weeks ago on macos\nhttps://cirrus-ci.com/task/6629554545491968 2 weeks ago on macos\nhttps://cirrus-ci.com/task/5253255562264576 this week on freebsd\n\nIt seems like there's a larger discussion to be had about the architecture\nabout the patch, but I thought I'd point out this detail.Thanks! 
Will check it out.Pavel", "msg_date": "Fri, 1 Jul 2022 09:28:27 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi!\n\nHere is a new version of the patchset with following changes:\n- change unit test of page_conversion to address rare cfbot fails (the\nreason of cfbot to time out was the unit test not written accurate enough);\n- fix pg_upgarde on 32-bit systems;\n- switch to XidList type (introduced by f10a025cfe97c1a34) for logical\nreplication, abandoning previously used Int64List type;\n- use cat version instead of major version as a boundary from 32 to 64 bit\nxids in pg_upgrade;\n- this cat version boundary temporary set to 999999999 for pg_upgrade\ntesting purpose;\n- also rebased to the actual master branch.\n\nOn 32-bit arch we have noticed pg_upgrade from 32 to 64 bit xids fails due\nto different TOAST_MAX_CHUNK_SIZE. On a 64-bit xid page we have less\navailable space due to adding a heap page special. This leads to recalc of\nTOAST_MAX_CHUNK_SIZE.\n\nThis was not a problem on 64 bit architectures, as padding bytes on 32-bit\nxids on TOAST pages were enough to accommodate heap page special of 64-bit\nxids with TOAST_MAX_CHUNK_SIZE unchanged. On 32 bits architectures padding\nbytes were not enough and this needed TOAST_MAX_CHUNK_SIZE to be of\ndifferent size (on 64-bit xids version).\n\nChanges of TOAST_MAX_CHUNK_SIZE lead to being unable to pg_upgrade onto\n64-bit xids. This was a real problem, since TOAST of relation requires all\nchunks to be the same size. In other words, we can not mix TOAST chunks of\nprevious (32-bit xid TOAST pages) with the new one with 64-bit xid TOAST\npages.\n\nThe solution was to use different specials for TOAST and heap pages. 
Since,\nTOAST tuples can not have multixacts and does not need pd_multi_base on\npage.\n\nThus, v39 is improved relative to v38 a lot.\n\nAs always, feel free to review and share your thoughts on a subject.\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Wed, 6 Jul 2022 15:55:02 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi!\n\n>\nThe patchset stops applying, so here is a rebased version.\n\nAlso we've forgotten including atomic shared memory xid access from patch\nv36 to v39.\nSo, add these changes here in v40.\n\nAs always, reviews are very welcome!\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Fri, 8 Jul 2022 17:36:07 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi!\n\nDue to changes in upstream patchset stop applying.\nHere is rebased version with minor refactoring in contrib/pageinspect.\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Mon, 11 Jul 2022 16:19:39 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi!\n\nOnce again, due to changes in upstream patchset stop applying.\nHere is a rebased version with minor refactoring in sequence.c.\nAlso remove unnecessary refactoring of pg_upgrade/file.c and return it to\nupstream state.\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Wed, 13 Jul 2022 17:46:25 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi, hackers!\nv42 stopped applying, so we rebased it to v43. Attached is a GitHub link,\nbut I see Cfbot hasn't become green. 
Apparently, it hasn't seen changes in\nGitHub link relative to v42 attached as a patch.\n\n-- \nBest regards,\nPavel Borisov\n\nHi, hackers!v42 stopped applying, so we rebased it to v43. Attached is a GitHub link, but I see Cfbot hasn't become green. Apparently, it hasn't seen changes in GitHub link relative to v42 attached as a patch.-- Best regards,Pavel Borisov", "msg_date": "Fri, 15 Jul 2022 15:20:58 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Fri, 15 Jul 2022 at 15:20, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n\n> Hi, hackers!\n> v42 stopped applying, so we rebased it to v43. Attached is a GitHub link,\n> but I see Cfbot hasn't become green. Apparently, it hasn't seen changes in\n> GitHub link relative to v42 attached as a patch.\n>\n\nGithub link is as follows:\nhttps://github.com/ziva777/postgres/tree/64xid-cf\nMaybe this will enable CFbot to see it.\n\n-- \nBest regards,\nPavel Borisov\n\nOn Fri, 15 Jul 2022 at 15:20, Pavel Borisov <pashkin.elfe@gmail.com> wrote:Hi, hackers!v42 stopped applying, so we rebased it to v43. Attached is a GitHub link, but I see Cfbot hasn't become green. Apparently, it hasn't seen changes in GitHub link relative to v42 attached as a patch.Github link is as follows:https://github.com/ziva777/postgres/tree/64xid-cfMaybe this will enable CFbot to see it.-- Best regards,Pavel Borisov", "msg_date": "Fri, 15 Jul 2022 15:23:29 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Fri, Jul 15, 2022 at 03:23:29PM +0400, Pavel Borisov wrote:\n> On Fri, 15 Jul 2022 at 15:20, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > Hi, hackers!\n> > v42 stopped applying, so we rebased it to v43. Attached is a GitHub link,\n> > but I see Cfbot hasn't become green. 
Apparently, it hasn't seen changes in\n> > GitHub link relative to v42 attached as a patch.\n> \n> Github link is as follows:\n> https://github.com/ziva777/postgres/tree/64xid-cf\n> Maybe this will enable CFbot to see it.\n\nMy suggestion was a bit ambiguous, sorry for the confusion.\n\nThe \"Git link\" is just an annotation - cfbot doesn't try to do anything with\nit[0] (but cirrusci will run the same checks under your github account).\nWhat I meant was that it doesn't seem important to send rebases multiple times\nper week if there have been no other changes.  Anyone is still able to review the\npatch.  It's possible to build it by applying the patch to a checkout of the\npostgres tree at the time the patch was sent.  Or by using the git link.  Or by\nchecking out cfbot's branch here, from the last time it *did* apply.\nhttps://github.com/postgresql-cfbot/postgresql/tree/commitfest/38/3594\n\n[0] (I think cfbot *could* try to read the git link, and apply patches that it\nfinds there, but that's an idea that hasn't been proposed or discussed.  It'd\nneed to know which branch to use, and it'd need to know when to use the git\nlink and when to use the most-recent email attachments).\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 15 Jul 2022 07:16:58 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Fri, 15 Jul 2022 at 16:17, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Fri, Jul 15, 2022 at 03:23:29PM +0400, Pavel Borisov wrote:\n> > On Fri, 15 Jul 2022 at 15:20, Pavel Borisov <pashkin.elfe@gmail.com>\n> wrote:\n> > > Hi, hackers!\n> > > v42 stopped applying, so we rebased it to v43. Attached is a GitHub\n> link,\n> > > but I see Cfbot hasn't become green. 
Apparently, it hasn't seen\n> changes in\n> > > GitHub link relative to v42 attached as a patch.\n> >\n> > Github link is as follows:\n> > https://github.com/ziva777/postgres/tree/64xid-cf\n> > Maybe this will enable CFbot to see it.\n>\n> My suggestion was a bit ambiguous, sorry for the confusion.\n>\n> The \"Git link\" is just an annotation - cfbot doesn't try to do anything\n> with\n> it[0] (but cirrusci will run the same checks under your github account).\n> What I meant was that it doesn't seem imporant to send rebases multiple\n> times\n> per week if there's been no other changes. Anyone is still able to review\n> the\n> patch. It's possible to build it by applying the patch to a checkout of\n> the\n> postgres tree at the time the patch was sent. Or by using the git link.\n> Or by\n> checking out cfbot's branch here, from the last time it *did* apply.\n> https://github.com/postgresql-cfbot/postgresql/tree/commitfest/38/3594\n>\n> [0] (I think cfbot *could* try to read the git link, and apply patches\n> that it\n> finds there, but that's an idea that hasn't been proposed or discussed.\n> It'd\n> need to know which branch to use, and it'd need to know when to use the git\n> link and when to use the most-recent email attachments).\n>\n\nHi, Justin!\n\nI can agree with you that sending rebased patches too often can be a little\nannoying. On the other hand, otherwise, it's just red in Cfbot. I suppose\nit's much easier and more comfortable to review the patches that at least\napply cleanly and pass all tests. 
So if Cfbot is red for a long time I feel\nwe need to send a rebased patchset anyway.\n\nI'll try not to do this too often but frankly, I don't see a better\nalternative at the moment.\n\nAnyway, big thanks for your advice and attention to this thread!\n\n-- \nBest regards,\nPavel Borisov\n\n", "msg_date": "Fri, 15 Jul 2022 16:31:39 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi hackers,\n\n> I can agree with you that sending rebased patches too often can be a little annoying. On the other hand, otherwise, it's just red in Cfbot. 
I\n> suppose it's much easier and more comfortable to review the patches that at\n> least apply cleanly and pass all tests. So if Cfbot is red for a long time\n> I feel we need to send a rebased patchset anyway.\n> >\n> > I'll try to not doing this too often but frankly, I don't see a better\n> alternative at the moment.\n>\n> Considering the overall activity on the mailing list personally I\n> don't see a problem here. Several extra emails don't bother me at all,\n> but I would like to see a green cfbot report for an open item in the\n> CF application. Otherwise someone will complain that the patch doesn't\n> apply anymore and the result will be the same as for sending an\n> updated patch, except that we will receive at least two emails instead\n> of one.\n>\nHi, Alexander!\nAgree with you. I also consider green cfbot entry important. So PFA rebased\nv43.\n\n-- \nBest regards,\nPavel Borisov", "msg_date": "Mon, 18 Jul 2022 13:23:42 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Wed, Jan 05, 2022 at 06:12:26PM -0600, Justin Pryzby wrote:\n> On Wed, Jan 05, 2022 at 06:51:37PM -0500, Bruce Momjian wrote:\n> > On Tue, Jan 4, 2022 at 10:22:50PM +0000, Finnerty, Jim wrote:\n> > > I'm concerned about the maintainability impact of having 2 new\n> > > on-disk page formats. It's already complex enough with XIDs and\n> > > multixact-XIDs.\n> > >\n> > > If the lack of space for the two epochs in the special data area is\n> > > a problem only in an upgrade scenario, why not resolve the problem\n> > > before completing the upgrade process like a kind of post-process\n> > > pg_repack operation that converts all \"double xmax\" pages to\n> > > the \"double-epoch\" page format? i.e. 
maybe the \"double xmax\"\n> > > representation is needed as an intermediate representation during\n> > > upgrade, but after upgrade completes successfully there are no pages\n> > > with the \"double-xmax\" representation. This would eliminate a whole\n> > > class of coding errors and would make the code dealing with 64-bit\n> > > XIDs simpler and more maintainable.\n> > \n> > Well, yes, we could do this, and it would avoid the complexity of having\n> > to support two XID representations, but we would need to accept that\n> > fast pg_upgrade would be impossible in such cases, since every page\n> > would need to be checked and potentially updated.\n> > \n> > You might try to do this while the server is first started and running\n> > queries, but I think we found out from the online checkpoint patch that\n> \n> I think you meant the online checksum patch. Which this reminded me of, too.\n\nI wondered whether anyone had considered using relation forks to maintain state\nof these long, transitional processes.\n\nEither a whole new fork, or additional bits in the visibility map, which has\npage-level bits.\n\nThere'd still need to be a flag somewhere indicating whether\nchecksums/xid64s/etc were enabled cluster-wide. The VM/fork bits would need to\nbe checked while the cluster was being re-processed online. This would add\nsome overhead. After the cluster had reached its target state, the flag could\nbe set, and the VM bits would no longer need to be checked.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 26 Aug 2022 15:04:46 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Mon, Jul 18, 2022 at 2:54 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>>\n>> > I can agree with you that sending rebased patches too often can be a little annoying. On the other hand, otherwise, it's just red in Cfbot. 
I suppose it's much easier and more comfortable to review the patches that at least apply cleanly and pass all tests. So if Cfbot is red for a long time I feel we need to send a rebased patchset anyway.\n>> >\n>> > I'll try to not doing this too often but frankly, I don't see a better alternative at the moment.\n>>\n>> Considering the overall activity on the mailing list personally I\n>> don't see a problem here. Several extra emails don't bother me at all,\n>> but I would like to see a green cfbot report for an open item in the\n>> CF application. Otherwise someone will complain that the patch doesn't\n>> apply anymore and the result will be the same as for sending an\n>> updated patch, except that we will receive at least two emails instead\n>> of one.\n>\n> Hi, Alexander!\n> Agree with you. I also consider green cfbot entry important. So PFA rebased v43.\n\nSince we have converted TransactionId to 64-bit, so do we still need\nthe concept of FullTransactionId? I mean it is really confusing to\nhave 3 forms of transaction ids. i.e. Transaction Id,\nFullTransactionId and ShortTransactionId.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 4 Sep 2022 09:53:57 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Sun, Sep 4, 2022 at 9:53 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Jul 18, 2022 at 2:54 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> >>\n> >> > I can agree with you that sending rebased patches too often can be a little annoying. On the other hand, otherwise, it's just red in Cfbot. I suppose it's much easier and more comfortable to review the patches that at least apply cleanly and pass all tests. 
So if Cfbot is red for a long time I feel we need to send a rebased patchset anyway.\n> >> >\n> >> > I'll try to not doing this too often but frankly, I don't see a better alternative at the moment.\n> >>\n> >> Considering the overall activity on the mailing list personally I\n> >> don't see a problem here. Several extra emails don't bother me at all,\n> >> but I would like to see a green cfbot report for an open item in the\n> >> CF application. Otherwise someone will complain that the patch doesn't\n> >> apply anymore and the result will be the same as for sending an\n> >> updated patch, except that we will receive at least two emails instead\n> >> of one.\n> >\n> > Hi, Alexander!\n> > Agree with you. I also consider green cfbot entry important. So PFA rebased v43.\n>\n> Since we have converted TransactionId to 64-bit, so do we still need\n> the concept of FullTransactionId? I mean it is really confusing to\n> have 3 forms of transaction ids. i.e. Transaction Id,\n> FullTransactionId and ShortTransactionId.\n\nI have done some more reviews to understand the idea. 
I think this\npatch needs far more comments to make it completely readable.\n\n1.\n typedef struct HeapTupleData\n {\n+ TransactionId t_xmin; /* base value for normal transaction ids */\n+ TransactionId t_xmax; /* base value for mutlixact */\n\nI think the field name and comments are not in sync, field says xmin\nand xmax whereas the comment says base value for\ntransaction id and multi-xact.\n\n2.\nextern bool heap_page_prepare_for_xid(Relation relation, Buffer buffer,\n TransactionId xid, bool multi);\n\nI noticed that this function is returning bool but all the callers are\nignoring the return type.\n\n3.\n+static int\n+heap_page_try_prepare_for_xid(Relation relation, Buffer buffer, Page page,\n+ TransactionId xid, bool multi, bool is_toast)\n+{\n+ TransactionId base;\n+ ShortTransactionId min = InvalidTransactionId,\n\nadd function header comments.\n\n4.\n\n+ if (!multi)\n+ {\n+ Assert(!is_toast || !(htup->t_infomask & HEAP_XMAX_IS_MULTI));\n+\n+ if (TransactionIdIsNormal(htup->t_choice.t_heap.t_xmin) &&\n+ !HeapTupleHeaderXminFrozen(htup))\n+ {\n+ xid_min_max(min, max, htup->t_choice.t_heap.t_xmin, &found);\n+ }\n+\n+ if (htup->t_infomask & HEAP_XMAX_INVALID)\n+ continue;\n+\n+ if ((htup->t_infomask & HEAP_XMAX_IS_MULTI) &&\n+ (!(htup->t_infomask & HEAP_XMAX_LOCK_ONLY)))\n+ {\n+ TransactionId update_xid;\n+ ShortTransactionId xid;\n+\n+ Assert(!is_toast);\n+ update_xid = MultiXactIdGetUpdateXid(HeapTupleHeaderGetRawXmax(page, htup),\n+ htup->t_infomask);\n+ xid = NormalTransactionIdToShort(HeapPageGetSpecial(page)->pd_xid_base,\n+ update_xid);\n+\n+ xid_min_max(min, max, xid, &found);\n+ }\n+ }\n\nWhy no handling for multi? 
And this function has absolutely no\ncomments to understand the reason for this.\n\n5.\n+ if (IsToastRelation(relation))\n+ {\n+ PageInit(page, BufferGetPageSize(buffer), sizeof(ToastPageSpecialData));\n+ ToastPageGetSpecial(page)->pd_xid_base = RecentXmin -\nFirstNormalTransactionId;\n+ }\n+ else\n+ {\n+ PageInit(page, BufferGetPageSize(buffer), sizeof(HeapPageSpecialData));\n+ HeapPageGetSpecial(page)->pd_xid_base = RecentXmin - FirstNormalTransactionId;\n+ }\n\nWhy pd_xid_base can not be just RecentXmin? Please explain in the\ncomments above.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 4 Sep 2022 15:50:01 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Thank you for your review.\n\nSince we have converted TransactionId to 64-bit, so do we still need\n> the concept of FullTransactionId? I mean it is really confusing to\n> have 3 forms of transaction ids. i.e. Transaction Id,\n> FullTransactionId and ShortTransactionId.\n>\nYeah, I totally agree with you. Actually, it is better to get rid of them,\nif this patch set will be committed.\nWe've already tried to do some experiments on this issue. But,\nunfortunately, this resulted in bloating\nthe patch set. 
So, we decided to address this in the future.\n\n1.\n> typedef struct HeapTupleData\n> {\n> + TransactionId t_xmin; /* base value for normal transaction ids */\n> + TransactionId t_xmax; /* base value for mutlixact */\n>\n> I think the field name and comments are not in sync, field says xmin\n> and xmax whereas the comment says base value for\n> transaction id and multi-xact.\n>\nFixed.\n\n\n> 2.\n> extern bool heap_page_prepare_for_xid(Relation relation, Buffer buffer,\n> TransactionId xid, bool multi);\n>\n> I noticed that this function is returning bool but all the callers are\n> ignoring the return type.\n>\nFixed.\n\n\n> 3.\n> +static int\n> +heap_page_try_prepare_for_xid(Relation relation, Buffer buffer, Page page,\n> + TransactionId xid, bool multi, bool is_toast)\n> +{\n> + TransactionId base;\n> + ShortTransactionId min = InvalidTransactionId,\n>\n> add function header comments.\n>\nFixed. Also, I made some refactoring to make this more clear.\n\n\n> 4.\n> Why no handling for multi? And this function has absolutely no\n> comments to understand the reason for this.\n>\nActually, this function works for multi transactions as well as for\n\"regular\" transactions.\nBut in case of \"regular\" transactions, we have to look through multi\ntransactions to\nsee if any update transactions for particular tuple is present or not.\nI add comments around here to make it clear.\n\n\n> 5.\n> Why pd_xid_base can not be just RecentXmin? Please explain in the\n> comments above.\n>\nWe're doing this, if I'm not mistaken, to be able to get all the possible\nXID's\nvalues, including InvalidTransactionId, FrozenTransactionId and so on. In\nother\nwords, we must be able to get XID's values including special ones.\n\nHere is a rebased version of a patch set. 
As always, reviews are very\nwelcome!\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Fri, 16 Sep 2022 11:59:20 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "I have to say, to my embarrassment, after sending the previous email, I've\nnotice minor imperfections in a patch set caused by the last rebase.\nThese imperfections led to cf bot fail. I'll address this issue in the next\niteration in order not to generate excessive flow.\n\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Fri, 16 Sep 2022 16:59:04 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi,\n\nIs this patch target to PG16 now?\n\nI want to have a look at these patches, but apply on master failed:\n\nApplying: Use 64-bit numbering of SLRU pages.\nApplying: Use 64-bit format to output XIDs\nApplying: Use 64-bit FullTransactionId instead of Epoch:xid\nApplying: Use 64-bit pages representation in SLRU callers.\nerror: patch failed: src/backend/access/transam/multixact.c:1228\nerror: src/backend/access/transam/multixact.c: patch does not apply\nPatch failed at 0004 Use 64-bit pages representation in SLRU callers.\nhint: Use 'git am --show-current-patch=diff' to see the failed patch\n\n\nRegards,\nZhang Mingli", "msg_date": "Tue, 20 Sep 2022 15:37:47 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi,\n\nWith these patches, it seems that we don’t need to handle wraparound in\nGetNextLocalTransactionId() too, as LocalTransactionId is unit64 now.\n\n```\nLocalTransactionId\nGetNextLocalTransactionId(void)\n{\n    LocalTransactionId result;\n\n    /* loop to avoid returning InvalidLocalTransactionId at wraparound */\n    do\n    {\n        result = nextLocalTransactionId++;\n    } while (!LocalTransactionIdIsValid(result));\n\n    return result;\n}\n```\n\nRegards,\nZhang Mingli", "msg_date": "Tue, 20 Sep 2022 16:15:40 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Tue, Sep 20, 2022 at 03:37:47PM +0800, Zhang Mingli wrote:\n> I want to have a look at these patches, but apply on master failed:\n\nYeah, it's likely to break every week or more often.\n\nYou have a few options:\n\n0) resolve the conflict 
yourself;\n\n1) apply the patch to the commit that the authors sent it against, or\nsome commit before the conflicting file(s) were changed in master. Like\nmaybe \"git checkout -b 64bitxids f66d997fd\".\n\n2) Use the last patch that cfbot successfully created. You can read the\npatch on github's web interface, or add cfbot's user as a remote to use\nthe patch locally for review and/or compilation. Something like \"git\nremote add cfbot https://github.com/postgresql-cfbot/postgresql; git\nfetch cfbot commitfest/39/3594; git checkout -b 64bitxids\ncfbot/commitfest/39/3594\". (Unfortunately, cfbot currently squishes the\npatch series into a single commit and loses the commit message).\n\nYou could also check the git link in the commitfest, to see if the\nauthor has already rebased it, but haven't yet mailed the rebased patch\nto the list. In this case, that's not true, but you could probably use\nthe author's branch on github, too.\nhttps://commitfest.postgresql.org/39/3594/\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 20 Sep 2022 04:26:43 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi,\nOn Sep 20, 2022, 17:26 +0800, Justin Pryzby <pryzby@telsasoft.com>, wrote:\n> On Tue, Sep 20, 2022 at 03:37:47PM +0800, Zhang Mingli wrote:\n> > I want to have a look at these patches, but apply on master failed:\n>\n> Yeah, it's likely to break every week or more often.\n>\n> You have a few options:\n>\n> 0) resolve the conflict yourself;\n>\n> 1) apply the patch to the commit that the authors sent it against, or\n> some commit before the conflicting file(s) were changed in master. Like\n> maybe \"git checkout -b 64bitxids f66d997fd\".\n>\n> 2) Use the last patch that cfbot successfully created. You can read the\n> patch on github's web interface, or add cfbot's user as a remote to use\n> the patch locally for review and/or compilation. 
Something like \"git\n> remote add cfbot https://github.com/postgresql-cfbot/postgresql; git\n> fetch cfbot commitfest/39/3594; git checkout -b 64bitxids\n> cfbot/commitfest/39/3594\". (Unfortunately, cfbot currently squishes the\n> patch series into a single commit and loses the commit message).\n>\n> You could also check the git link in the commitfest, to see if the\n> author has already rebased it, but haven't yet mailed the rebased patch\n> to the list. In this case, that's not true, but you could probably use\n> the author's branch on github, too.\n> https://commitfest.postgresql.org/39/3594/\n>\n> --\n> Justin\nGot it, thanks.\n\n\nRegards,\nZhang Mingli", "msg_date": "Tue, 20 Sep 2022 17:37:32 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi!\n\nHere is a rebased version of the patch set.\nMajor changes are:\n1. Fix rare replica fault.\n Upon page pruning in heap_page_prune, page fragmentation repair is\ndetermined by\n a parameter repairFragmentation. At the same time, on a replica, upon\nhandling XLOG_HEAP2_PRUNE record type\n in heap_xlog_prune, we always call heap_page_prune_execute with\nrepairFragmentation parameter equal to true.\n This caused page inconsistency and lead to the crash of the replica. Fix\nthis by adding new flag in\n struct xl_heap_prune.\n2. Add support for meson build.\n3. Add assertion \"buffer is locked\" in HeapTupleCopyBaseFromPage.\n4. Add assertion \"buffer is locked exclusive\" in heap_page_shift_base.\n5. Prevent excessive growth of xmax in heap_prepare_freeze_tuple.\n\nAs always, reviews are very welcome!\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Fri, 7 Oct 2022 14:04:09 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Fri, Oct 07, 2022 at 02:04:09PM +0300, Maxim Orlov wrote:\n> As always, reviews are very welcome!\n\nThis patch set needs a rebase, as far as I can see.\n--\nMichael", "msg_date": "Wed, 12 Oct 2022 16:37:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": ">\n> This patch set needs a rebase, as far as I can see.\n>\n\nDone! Thanks! Here is the rebased version.\n\nThis version has bug fix for multixact replication. 
Previous versions of\nthe patch set does not write pd_multi_base in WAL. Thus, this field was set\nto 0 upon WAL replay on replica.\nThis caused replica to panic. Fix this by adding pd_multi_base of a page\ninto WAL. Appropriate tap test is added.\n\nAlso, add refactoring and improvements in heapam.c in order to reduce diff\nand make it more \"tidy\".\n\nReviews and opinions are very welcome!\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Fri, 21 Oct 2022 19:09:15 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi,\n\n\nOn Oct 22, 2022, 00:09 +0800, Maxim Orlov <orlovmg@gmail.com>, wrote:\n> >\n> > Done! Thanks! Here is the rebased version.\n> >\n> > This version has bug fix for multixact replication. Previous versions of the patch set does not write pd_multi_base in WAL. Thus, this field was set to 0 upon WAL replay on replica.\n> > This caused replica to panic. Fix this by adding pd_multi_base of a page into WAL. Appropriate tap test is added.\n> >\n> > Also, add refactoring and improvements in heapam.c in order to reduce diff and make it more \"tidy\".\n> >\n> > Reviews and opinions are very welcome!\n> >\n> > --\n> > Best regards,\n> > Maxim Orlov.\nFound some outdated code comments around several variables, such as xidWrapLimit/xidWarnLimit/xidStopLimit.\n\nThese variables are not used any more.\n\nI attach an additional V48-0009 patch as they are just comments, apply it if you want to.\n\nRegards,\nZhang Mingli", "msg_date": "Sat, 22 Oct 2022 11:21:55 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi!\n\nI attach an additional V48-0009 patch as they are just comments, apply it\n> if you want to.\n>\nBig thank you for your review. 
I've applied your addition in the recent\npatch set below.\n\nBesides, mentioned above, next changes are made:\n- rename HeapTupleCopyBaseFromPage to HeapTupleCopyXidsFromPage, since this\nold name came from the time when the \"t_xid_base\" was stored in tuple,\n and not correspond to recent state of the code;\n- replace ToastTupleHeader* calls with HeapHeader* with the \"is_toast\"\nargument. 
This reduces diff and make the code more readable;\n> - put HeapTupleSetZeroXids calls in several places for the sake of redundancy;\n> - in heap_tuple_would_freeze add case to reset xmax without reading clog;\n> - rename SeqTupleHeaderSetXmax/Xmin to SeqTupleSetXmax/min and refactoring of the function; Now it will set HeapTuple and HeapTupleHeader xmax;\n> - add case of int64 values in check_GUC_init;\n> - massive refactoring in htup_details.h to use inline functions with type control over macro;\n> - reorder code in htup_details.h to reduce overall diff.\n>\n> As always, reviews and opinions are very welcome!\n\n0008 needs a rebase. heapam.h and catversion.h are failing.\n\nRegards\n\nThom\n\n\n", "msg_date": "Mon, 14 Nov 2022 11:25:34 +0000", "msg_from": "Thom Brown <thom@linux.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": ">\n>\n> 0008 needs a rebase. heapam.h and catversion.h are failing.\n>\n> Regards\n>\n> Thom\n>\n\nThanks, done!\n\nAlso add copying of the xmin and xmax while page is locked. In heapgetpage\nwe have to copy tuples xmin and xmax while\nwe're holding a lock. Since we do not hold a lock after that, values of\nxmin or xmax may be changed, and we may get\nincorrect values of those fields. This affects only the scenario when the\nuser select xmin or xmax \"directly\" by SQL query, AFAICS.\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Mon, 14 Nov 2022 16:56:07 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi hackers,\n\n> Thanks, done!\n\nDilip Kumar asked a good question in the thread about the 0001..0003\nsubset [1]. I would like to duplicate it here to make sure it was not\nmissed by mistake:\n\n\"\"\"\nHave we measured the WAL overhead because of this patch set? maybe\nthese particular patches will not impact but IIUC this is ground work\nfor making xid 64 bit. 
So each XLOG record size will increase at\nleast by 4 bytes because the XLogRecord contains the xid.\n\"\"\"\n\nDo we have an estimate on this?\n\n[1]: https://www.postgresql.org/message-id/CAFiTN-uudj2PY8GsUzFtLYFpBoq_rKegW3On_8ZHdxB1mVv3-A%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 14 Nov 2022 21:07:52 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi hackers,\n\n> Dilip Kumar asked a good question in the thread about the 0001..0003\n> subset [1]. I would like to duplicate it here to make sure it was not\n> missed by mistake:\n>\n> \"\"\"\n> Have we measured the WAL overhead because of this patch set? maybe\n> these particular patches will not impact but IIUC this is ground work\n> for making xid 64 bit. So each XLOG record size will increase at\n> least by 4 bytes because the XLogRecord contains the xid.\n> \"\"\"\n>\n> Do we have an estimate on this?\n\nI decided to simulate one completely synthetic and non-representative\nscenario when we write multiple small WAL records.\n\nHere is what I did:\n\n$ psql -c 'CREATE TABLE phonebook(\n \"id\" SERIAL PRIMARY KEY NOT NULL,\n \"name\" TEXT NOT NULL,\n \"phone\" INT NOT NULL);'\n\n$ echo 'INSERT INTO phonebook (name, phone) VALUES (random(),\nrandom());' > t.sql\n\n$ pgbench -j 8 -c 8 -f t.sql -T 60 eax\n\n== 32-bit XIDs ==\n\nBranch: https://github.com/afiskon/postgres/tree/64bit_xids_v50_14Nov_without_patch\n\npgbench output:\n\n```\nnumber of transactions actually processed: 68650\nnumber of failed transactions: 0 (0.000%)\nlatency average = 6.993 ms\ninitial connection time = 5.415 ms\ntps = 1144.074340 (without initial connection time)\n```\n\n$ ls -lah /home/eax/projects/pginstall/data-master/pg_wal\n...\n-rw------- 1 eax eax 16M Nov 16 12:48 000000010000000000000002\n-rw------- 1 eax eax 16M Nov 16 12:47 000000010000000000000003\n\n$ pg_waldump 
-p ~/projects/pginstall/data-master/pg_wal\n000000010000000000000002 000000010000000000000003 | perl -e 'while(<>)\n{ $_ =~ m#len \\(rec/tot\\):\\s*(\\d+)/\\s*(\\d+),#; $rec += $1; $tot += $2;\n$count++; } $rec /= $count; $tot /= $count; print \"rec: $rec, tot:\n$tot\\n\";'\n\npg_waldump: error: error in WAL record at 0/28A4118: invalid record\nlength at 0/28A4190: wanted 24, got 0\nrec: 65.8201835569952, tot: 67.3479022057689\n\n== 64-bit XIDs ==\n\nBranch: https://github.com/afiskon/postgres/tree/64bit_xids_v50_14Nov\n\npgbench output:\n\n```\nnumber of transactions actually processed: 68744\nnumber of failed transactions: 0 (0.000%)\nlatency average = 6.983 ms\ninitial connection time = 5.334 ms\ntps = 1145.664765 (without initial connection time)\n```\n\n$ ls -lah /home/eax/projects/pginstall/data-master/pg_wal\n...\n-rw------- 1 eax eax 16M Nov 16 12:32 000000010000000000000002\n-rw------- 1 eax eax 16M Nov 16 12:31 000000010000000000000003\n\n$ pg_waldump -p ~/projects/pginstall/data-master/pg_wal\n000000010000000000000002 000000010000000000000003 | perl -e 'while(<>)\n{ $_ =~ m#len \\(rec/tot\\):\\s*(\\d+)/\\s*(\\d+),#; $rec += $1; $tot += $2;\n$count++; } $rec /= $count; $tot /= $count; print \"rec: $rec, tot:\n$tot\\n\";'\n\npg_waldump: error: error in WAL record at 0/29F4778: invalid record\nlength at 0/29F4810: wanted 26, got 0\nrec: 69.1783950928736, tot: 70.6413934278527\n\nSo under this load with 64-bit XIDs we see ~5% penalty in terms of the\nWAL size. 
This seems to be expected considering the fact that\nsizeof(XLogRecord) became 4 bytes larger and the average record size\nwas about 66 bytes.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 16 Nov 2022 12:57:27 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nI have a very serious concern about the current patch set, as someone who has faced transaction id wraparound in the past.\r\n\r\nI can start by saying I think it would be helpful (if the other issues are approached reasonably) to have 64-bit xids, but there is an important piece of context in preventing xid wraparounds that seems missing from this patch unless I missed something.\r\n\r\nXID wraparound is a symptom, not an underlying problem. It usually occurs when autovacuum or other vacuum strategies have unexpected stalls and therefore fail to work as expected. Shifting to 64-bit XIDs dramatically changes the sorts of problems that these stalls are likely to pose to operational teams. -- you can find you are running out of storage rather than facing an imminent database shutdown. Worse, this patch delays the problem until some (possibly far later!) time, when vacuum will take far longer to finish, and options for resolving the problem are diminished. As a result I am concerned that merely changing xids from 32-bit to 64-bit will lead to a smaller number of far more serious outages.\r\n\r\nWhat would make a big difference from my perspective would be to combine this with an inverse system for warning that there is a problem, allowing the administrator to throw warnings about xids since last vacuum, with a configurable threshold. 
We could have this at two billion by default as that would pose operational warnings not much later than we have now.\r\n\r\nOtherwise I can imagine cases where instead of 30 hours to vacuum a table, it takes 300 hours on a database that is short on space. And I would not want to be facing such a situation.\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Mon, 21 Nov 2022 07:58:00 +0000", "msg_from": "Chris Travers <chris.travers@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "> I have a very serious concern about the current patch set. as someone who has faced transaction id wraparound in the past.\n>\n> I can start by saying I think it would be helpful (if the other issues are approached reasonably) to have 64-bit xids, but there is an important piece of context in reventing xid wraparounds that seems missing from this patch unless I missed something.\n>\n> XID wraparound is a symptom, not an underlying problem. It usually occurs when autovacuum or other vacuum strategies have unexpected stalls and therefore fail to work as expected. Shifting to 64-bit XIDs dramatically changes the sorts of problems that these stalls are likely to pose to operational teams. -- you can find you are running out of storage rather than facing an imminent database shutdown. Worse, this patch delays the problem until some (possibly far later!) time, when vacuum will take far longer to finish, and options for resolving the problem are diminished. As a result I am concerned that merely changing xids from 32-bit to 64-bit will lead to a smaller number of far more serious outages.\n>\n> What would make a big difference from my perspective would be to combine this with an inverse system for warning that there is a problem, allowing the administrator to throw warnings about xids since last vacuum, with a configurable threshold. 
We could have this at two billion by default as that would pose operational warnings not much later than we have now.\n>\n> Otherwise I can imagine cases where instead of 30 hours to vacuum a table, it takes 300 hours on a database that is short on space. And I would not want to be facing such a situation.\n\nHi, Chris!\nI had a similar stance when I started working on this patch. Of\ncourse, it seemed horrible just to postpone the consequences of\ninadequate monitoring, too long running transactions that prevent\naggressive autovacuum etc. So I can understand your point.\n\nWith time I've got to a little bit of another view of this feature i.e.\n\n1. It's important to correctly set monitoring, the cut-off of long\ntransactions, etc. anyway. It's not the responsibility of vacuum\nbefore wraparound to report inadequate monitoring etc. Furthermore, in\nreal life, this will be already too late if it prevents 32-bit\nwraparound and invokes much downtime in an unexpected moment of time\nif it occurs already. (The rough analogy for that is the machine\nrunning at 120mph turns every control off and applies full brakes just\nbecause the cooling liquid is low (of course there might be a warning\npreviously, but anyway))\n\n2. The checks and handlers for the event that is never expected in the\ncluster lifetime (~200 years at constant rate of 1e6 TPS) can be just\ndropped. Of course we still need to do automatic routine maintenance\nlike cutting SLRU buffers (but with a much bigger interval if we have\nmuch disk space e.g.). But I considered that we either can not care\nwhat will be with cluster after > 200 years (it will be migrated many\ntimes before this, on many reasons not related to Postgres even for\nthe most conservative owners). So the radical proposal is to drop\n64-bit wraparound at all. The most moderate one is just not taking\nvery much care that after 200 years we have more hassle than next\nmonth if we haven't set up everything correctly. 
Next month's pain\nwill be more significant even if it teaches dba something.\n\nBig thanks for your view on the general implementation of this feature, anyway.\n\nKind regards,\nPavel Borisov.\nSupabase\n\n\n", "msg_date": "Mon, 21 Nov 2022 13:39:03 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi hackers,\n\n> > I have a very serious concern about the current patch set. as someone who has faced transaction id wraparound in the past.\n>\n> [...]\n>\n> I had a similar stance when I started working on this patch. Of\n> course, it seemed horrible just to postpone the consequences of\n> inadequate monitoring, too long running transactions that prevent\n> aggressive autovacuum etc. So I can understand your point.\n>\n> With time I've got to a little bit of another view of this feature i.e.\n>\n> 1. It's important to correctly set monitoring, the cut-off of long\n> transactions, etc. anyway. It's not the responsibility of vacuum\n> before wraparound to report inadequate monitoring etc. Furthermore, in\n> real life, this will be already too late if it prevents 32-bit\n> wraparound and invokes much downtime in an unexpected moment of time\n> if it occurs already. (The rough analogy for that is the machine\n> running at 120mph turns every control off and applies full brakes just\n> because the cooling liquid is low (of course there might be a warning\n> previously, but anyway))\n>\n> 2. The checks and handlers for the event that is never expected in the\n> cluster lifetime (~200 years at constant rate of 1e6 TPS) can be just\n> dropped. Of course we still need to do automatic routine maintenance\n> like cutting SLRU buffers (but with a much bigger interval if we have\n> much disk space e.g.). 
But I considered that we either can not care\n> what will be with cluster after > 200 years (it will be migrated many\n> times before this, on many reasons not related to Postgres even for\n> the most conservative owners). So the radical proposal is to drop\n> 64-bit wraparound at all. The most moderate one is just not taking\n> very much care that after 200 years we have more hassle than next\n> month if we haven't set up everything correctly. Next month's pain\n> will be more significant even if it teaches dba something.\n>\n> Big thanks for your view on the general implementation of this feature, anyway.\n\nI'm inclined to agree with Pavel on this one. Keeping 32-bit XIDs in\norder to intentionally trigger XID wraparound to indicate the ending\ndisk space and/or misconfigured system (by the time when it's usually\ntoo late anyway) is a somewhat arguable perspective. It would be great\nto notify the user about the potential issues with the configuration\nand/or the fact that VACUUM doesn't catch up. But it doesn't mean we\nshould keep 32-bit XIDs in order to achive this.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 21 Nov 2022 14:25:28 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Question about\n\n\"\"\"\nSubject: [PATCH v50 5/8] Add initdb option to initialize cluster with\n non-standard xid/mxid/mxoff.\n\nTo date testing database cluster wraparund was not easy as initdb has always\ninited it with default xid/mxid/mxoff. 
The option to specify any valid\nxid/mxid/mxoff at cluster startup will make these things easier.\n\"\"\"\n\nDoesn't pg_resetwal already provide that functionality, or at least some \nof it?\n\n\n\n", "msg_date": "Mon, 21 Nov 2022 20:05:04 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> To date testing database cluster wraparund was not easy as initdb has always\n>> inited it with default xid/mxid/mxoff. The option to specify any valid\n>> xid/mxid/mxoff at cluster startup will make these things easier.\n\n> Doesn't pg_resetwal already provide that functionality, or at least some \n> of it?\n\npg_resetwal does seem like a better, more useful home for this; it'd\nallow you to adjust these numbers after initial creation which might be\nuseful. I'm not sure how flexible it is right now in terms of where\nyou can set the new values, but that can always be improved.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Nov 2022 14:21:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi,\n\nOn 2022-11-21 14:21:35 -0500, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> >> To date testing database cluster wraparund was not easy as initdb has always\n> >> inited it with default xid/mxid/mxoff. The option to specify any valid\n> >> xid/mxid/mxoff at cluster startup will make these things easier.\n>\n> > Doesn't pg_resetwal already provide that functionality, or at least some\n> > of it?\n>\n> pg_resetwal does seem like a better, more useful home for this; it'd\n> allow you to adjust these numbers after initial creation which might be\n> useful. 
I'm not sure how flexible it is right now in terms of where\n> you can set the new values, but that can always be improved.\n\nIIRC the respective pg_resetwal parameters are really hard to use for\nsomething like this, because they don't actually create the respective\nSLRU segments. We of course could fix that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Nov 2022 12:15:01 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-21 14:21:35 -0500, Tom Lane wrote:\n>> pg_resetwal does seem like a better, more useful home for this; it'd\n>> allow you to adjust these numbers after initial creation which might be\n>> useful. I'm not sure how flexible it is right now in terms of where\n>> you can set the new values, but that can always be improved.\n\n> IIRC the respective pg_resetwal parameters are really hard to use for\n> something like this, because they don't actually create the respective\n> SLRU segments. We of course could fix that.\n\nIs that still true? We should fix it, for sure.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Nov 2022 15:16:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi,\n\nOn 2022-11-21 15:16:46 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-11-21 14:21:35 -0500, Tom Lane wrote:\n> >> pg_resetwal does seem like a better, more useful home for this; it'd\n> >> allow you to adjust these numbers after initial creation which might be\n> >> useful. 
I'm not sure how flexible it is right now in terms of where\n> >> you can set the new values, but that can always be improved.\n> \n> > IIRC the respective pg_resetwal parameters are really hard to use for\n> > something like this, because they don't actually create the respective\n> > SLRU segments. We of course could fix that.\n> \n> Is that still true? We should fix it, for sure.\n\nSure looks that way to me. I think it might mostly work if you manage to\nfind nextXid, nextMulti, nextMultiOffset values that each point to the\nstart of a segment that'd then be created whenever those values are\nused.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Nov 2022 12:20:33 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Mon, Nov 21, 2022 at 12:25 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi hackers,\n>\n> > > I have a very serious concern about the current patch set. as someone\n> who has faced transaction id wraparound in the past.\n> >\n> > [...]\n> >\n> > I had a similar stance when I started working on this patch. Of\n> > course, it seemed horrible just to postpone the consequences of\n> > inadequate monitoring, too long running transactions that prevent\n> > aggressive autovacuum etc. So I can understand your point.\n> >\n> > With time I've got to a little bit of another view of this feature i.e.\n> >\n> > 1. It's important to correctly set monitoring, the cut-off of long\n> > transactions, etc. anyway. It's not the responsibility of vacuum\n> > before wraparound to report inadequate monitoring etc. Furthermore, in\n> > real life, this will be already too late if it prevents 32-bit\n> > wraparound and invokes much downtime in an unexpected moment of time\n> > if it occurs already. 
(The rough analogy for that is the machine\n> > running at 120mph turns every control off and applies full brakes just\n> > because the cooling liquid is low (of course there might be a warning\n> > previously, but anyway))\n> >\n> > 2. The checks and handlers for the event that is never expected in the\n> > cluster lifetime (~200 years at constant rate of 1e6 TPS) can be just\n> > dropped. Of course we still need to do automatic routine maintenance\n> > like cutting SLRU buffers (but with a much bigger interval if we have\n> > much disk space e.g.). But I considered that we either can not care\n> > what will be with cluster after > 200 years (it will be migrated many\n> > times before this, on many reasons not related to Postgres even for\n> > the most conservative owners). So the radical proposal is to drop\n> > 64-bit wraparound at all. The most moderate one is just not taking\n> > very much care that after 200 years we have more hassle than next\n> > month if we haven't set up everything correctly. Next month's pain\n> > will be more significant even if it teaches dba something.\n> >\n> > Big thanks for your view on the general implementation of this feature,\n> anyway.\n>\n> I'm inclined to agree with Pavel on this one. Keeping 32-bit XIDs in\n> order to intentionally trigger XID wraparound to indicate the ending\n> disk space and/or misconfigured system (by the time when it's usually\n> too late anyway) is a somewhat arguable perspective. It would be great\n> to notify the user about the potential issues with the configuration\n> and/or the fact that VACUUM doesn't catch up. But it doesn't mean we\n> should keep 32-bit XIDs in order to achive this.\n>\n\nThat's not what I am suggesting. 
However I am saying that removing a
symptom of a problem so you get bit when you are in an even worse position
is a bad idea, and I would strenuously oppose including a patchset like
this without some sort of mitigating measures.

What I think should be added to address this concern is a GUC variable of
warn_max_xid_lag, and then change the logic which warns of impending xid
wraparound to logic that warns of xid lag reaching a value in excess of
this threshold.

Databases under load take a long time to correct problems and throwing new
problems onto DBA-land and saying "you figure out your monitoring" is not
something I want to support. Again, this comes from experience facing xid
wraparound issues. I have nothing against 64-bit xids. But I think the
patch ought not to delay onset of visible symptoms, that instead the focus
should be on making sure that efforts to address those symptoms can be
handled using less extreme measures and at a bit of a more relaxed pace.


>
> --
> Best regards,
> Aleksander Alekseev
>
>
>
", "msg_date": "Tue, 22 Nov 2022 03:38:58 +0100", "msg_from": "Chris Travers <chris@orioledata.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Mon, Nov 21, 2022 at 10:40 AM Pavel Borisov <pashkin.elfe@gmail.com>
wrote:

> > I have a very serious concern about the current patch set. as someone
> who has faced transaction id wraparound in the past.
> >
> > I can start by saying I think it would be helpful (if the other issues
> are approached reasonably) to have 64-bit xids, but there is an important
> piece of context in reventing xid wraparounds that seems missing from this
> patch unless I missed something.
> >
> > XID wraparound is a symptom, not an underlying problem. It usually
> occurs when autovacuum or other vacuum strategies have unexpected stalls
> and therefore fail to work as expected. 
Shifting to 64-bit XIDs\n> dramatically changes the sorts of problems that these stalls are likely to\n> pose to operational teams. -- you can find you are running out of storage\n> rather than facing an imminent database shutdown. Worse, this patch delays\n> the problem until some (possibly far later!) time, when vacuum will take\n> far longer to finish, and options for resolving the problem are\n> diminished. As a result I am concerned that merely changing xids from\n> 32-bit to 64-bit will lead to a smaller number of far more serious outages.\n> >\n> > What would make a big difference from my perspective would be to combine\n> this with an inverse system for warning that there is a problem, allowing\n> the administrator to throw warnings about xids since last vacuum, with a\n> configurable threshold. We could have this at two billion by default as\n> that would pose operational warnings not much later than we have now.\n> >\n> > Otherwise I can imagine cases where instead of 30 hours to vacuum a\n> table, it takes 300 hours on a database that is short on space. And I\n> would not want to be facing such a situation.\n>\n> Hi, Chris!\n> I had a similar stance when I started working on this patch. Of\n> course, it seemed horrible just to postpone the consequences of\n> inadequate monitoring, too long running transactions that prevent\n> aggressive autovacuum etc. So I can understand your point.\n>\n> With time I've got to a little bit of another view of this feature i.e.\n>\n> 1. It's important to correctly set monitoring, the cut-off of long\n> transactions, etc. anyway. It's not the responsibility of vacuum\n> before wraparound to report inadequate monitoring etc. Furthermore, in\n> real life, this will be already too late if it prevents 32-bit\n> wraparound and invokes much downtime in an unexpected moment of time\n> if it occurs already. 
(The rough analogy for that is the machine
> running at 120mph turns every control off and applies full brakes just
> because the cooling liquid is low (of course there might be a warning
> previously, but anyway))
>

So I disagree with you on a few critical points here.

Right now the way things work is:
1.  Database starts throwing warnings that xid wraparound is approaching
2.  Database-owning team initiates an emergency response, may take downtime
or degradation of services as a result
3.  People get frustrated with PostgreSQL because this is a reliability
problem.

What I am worried about is:
1.  Database is running out of space
2.  Database-owning team initiates an emergency response and takes more
downtime to get into a good spot
3.  People get frustrated with PostgreSQL because this is a reliability
problem.

If that's the way we go, I don't think we've solved that much. And as
humans we also bias our judgments towards newsworthy events, so rarer, more
severe problems are a larger perceived problem than the more routine, less
severe problems. So I think our image as a reliable database would suffer.

An ideal resolution from my perspective would be:
1.  Database starts throwing warnings that xid lag has reached severely
abnormal levels
2.  Database owning team initiates an effort to correct this, and does not
take downtime or degradation of services as a result
3.  People do not get frustrated because this is not a reliability problem
anymore.

Now, 64-bit xids are necessary to get us there but they are not
sufficient. One needs to fix the way we handle this sort of problem.
There is existing logic to warn if we are approaching xid wraparound. This
should be changed to check how many xids we have used rather than remaining
and have a sensible default there (optionally configurable).

I agree it is not vacuum's responsibility. 
It is the responsibility of the
current warnings we have to avoid more serious problems arising from this
change. These should just be adjusted rather than dropped.


> 2. The checks and handlers for the event that is never expected in the
> cluster lifetime (~200 years at constant rate of 1e6 TPS) can be just
> dropped. Of course we still need to do automatic routine maintenance
> like cutting SLRU buffers (but with a much bigger interval if we have
> much disk space e.g.). But I considered that we either can not care
> what will be with cluster after > 200 years (it will be migrated many
> times before this, on many reasons not related to Postgres even for
> the most conservative owners). So the radical proposal is to drop
> 64-bit wraparound at all. The most moderate one is just not taking
> very much care that after 200 years we have more hassle than next
> month if we haven't set up everything correctly. Next month's pain
> will be more significant even if it teaches dba something.
>
> Big thanks for your view on the general implementation of this feature,
> anyway.
>
> Kind regards,
> Pavel Borisov.
> Supabase
>
>
>
", "msg_date": "Tue, 22 Nov 2022 03:50:07 +0100", "msg_from": "Chris Travers <chris@orioledata.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi Chris,

> Right now the way things work is:
> 1. Database starts throwing warnings that xid wraparound is approaching
> 2. Database-owning team initiates an emergency response, may take downtime or degradation of services as a result
> 3. People get frustrated with PostgreSQL because this is a reliability problem.
>
> What I am worried about is:
> 1. Database is running out of space
> 2. Database-owning team initiates an emergency response and takes more downtime to into a good spot
> 3. People get frustrated with PostgreSQL because this is a reliability problem.
>
> If that's the way we go, I don't think we've solved that much. And as humans we also bias our judgments towards newsworthy events, so rarer, more severe problems are a larger perceived problem than the more routine, less severe problems. So I think our image as a reliable database would suffer.
>
> An ideal resolution from my perspective would be:
> 1. 
Database starts throwing warnings that xid lag has reached severely abnormal levels
> 2. Database owning team initiates an effort to correct this, and does not take downtime or degradation of services as a result
> 3. People do not get frustrated because this is not a reliability problem anymore.
>
> Now, 64-big xids are necessary to get us there but they are not sufficient. One needs to fix the way we handle this sort of problem. There is existing logic to warn if we are approaching xid wraparound. This should be changed to check how many xids we have used rather than remaining and have a sensible default there (optionally configurable).
>
> I agree it is not vacuum's responsibility. It is the responsibility of the current warnings we have to avoid more serious problems arising from this change. These should just be adjusted rather than dropped.

I disagree with the axiom that XID wraparound is merely a symptom and
not a problem.

Using 32-bit XIDs was a reasonable design decision back when disk
space was limited and disks were slow. The drawback of this approach
is the need to do the wraparound but again back then it was a
reasonable design choice. If XIDs were 64-bit from the beginning users
could run one billion (1,000,000,000) TPS for 584 years without a
wraparound. We wouldn't have it, similarly to how there is no wraparound
for WAL segments. Now when disks are much faster and much cheaper
32-bit XIDs are almost certainly not a good design choice anymore.
(Especially considering the fact that this particular patch mitigates
the problem of increased disk consumption greatly.)

Also I disagree with an argument that a DBA that doesn't monitor disk
space would care much about some strange warnings in the logs. If a
DBA doesn't monitor basic system metrics I'm afraid we can't help this
person much.

I do agree that we could probably provide some additional help for the
rest of the users when it comes to configuring VACUUM. 
This is indeed\nnon-trivial. However I don't think this is in scope of this particular\npatchset. I suggest we keep the focus in this discussion. If you have\na concrete proposal please consider starting a new thread.\n\nThis at least is my personal opinion. Let's give the rest of the\ncommunity a chance to share their thoughts.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 22 Nov 2022 12:00:57 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi, Alexander!\n\nOn Tue, 22 Nov 2022 at 13:01, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi Chris,\n>\n> > Right now the way things work is:\n> > 1. Database starts throwing warnings that xid wraparound is approaching\n> > 2. Database-owning team initiates an emergency response, may take downtime or degradation of services as a result\n> > 3. People get frustrated with PostgreSQL because this is a reliability problem.\n> >\n> > What I am worried about is:\n> > 1. Database is running out of space\n> > 2. Database-owning team initiates an emergency response and takes more downtime to into a good spot\n> > 3. People get frustrated with PostgreSQL because this is a reliability problem.\n> >\n> > If that's the way we go, I don't think we've solved that much. And as humans we also bias our judgments towards newsworthy events, so rarer, more severe problems are a larger perceived problem than the more routine, less severe problems. So I think our image as a reliable database would suffer.\n> >\n> > An ideal resolution from my perspective would be:\n> > 1. Database starts throwing warnings that xid lag has reached severely abnormal levels\n> > 2. Database owning team initiates an effort to correct this, and does not take downtime or degradation of services as a result\n> > 3. 
People do not get frustrated because this is not a reliability problem anymore.\n> >\n> > Now, 64-big xids are necessary to get us there but they are not sufficient. One needs to fix the way we handle this sort of problem. There is existing logic to warn if we are approaching xid wraparound. This should be changed to check how many xids we have used rather than remaining and have a sensible default there (optionally configurable).\n> >\n> > I agree it is not vacuum's responsibility. It is the responsibility of the current warnings we have to avoid more serious problems arising from this change. These should just be adjusted rather than dropped.\n>\n> I disagree with the axiom that XID wraparound is merely a symptom and\n> not a problem.\n>\n> Using 32-bit XIDs was a reasonable design decision back when disk\n> space was limited and disks were slow. The drawback of this approach\n> is the need to do the wraparound but agaig back then it was a\n> reasonable design choice. If XIDs were 64-bit from the beginning users\n> could run one billion (1,000,000,000) TPS for 584 years without a\n> wraparound. We wouldn't have it similarly as there is no wraparound\n> for WAL segments. Now when disks are much faster and much cheaper\n> 32-bit XIDs are almost certainly not a good design choice anymore.\n> (Especially considering the fact that this particular patch mitigates\n> the problem of increased disk consumption greatly.)\n>\n> Also I disagree with an argument that a DBA that doesn't monitor disk\n> space would care much about some strange warnings in the logs. If a\n> DBA doesn't monitor basic system metrics I'm afraid we can't help this\n> person much.\n>\n> I do agree that we could probably provide some additional help for the\n> rest of the users when it comes to configuring VACUUM. This is indeed\n> non-trivial. However I don't think this is in scope of this particular\n> patchset. I suggest we keep the focus in this discussion. 
If you have\n> a concrete proposal please consider starting a new thread.\n>\n> This at least is my personal opinion. Let's give the rest of the\n> community a chance to share their thoughts.\n\nI agree with Alexander, that notifications for DBA are a little bit\noutside the scope of the activity in this thread unless we've just\ndropped some existing notifications, considering they're not\nsignificant anymore. If that was the point, please Chris mention what\nexisting notifications you want to return. I don't think it's a big\ndeal to have the patch with certain notifications inherited from\nMaster branch.\n\nKind regards,\nPavel Borisov\nSupabase.\n\n\n", "msg_date": "Tue, 22 Nov 2022 18:00:04 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi hackers,\n\n[ Excluding my personal e-mail from cc:, not sure how it got there.\nPlease don't cc: to afiskon@gmail.com, I'm not using it for reading\npgsql-hackers@. ]\n\n> I agree with Alexander, that notifications for DBA are a little bit\n> outside the scope of the activity in this thread unless we've just\n> dropped some existing notifications, considering they're not\n> significant anymore. If that was the point, please Chris mention what\n> existing notifications you want to return. 
I don't think it's a big\n> deal to have the patch with certain notifications inherited from\n> Master branch.\n\nTo clarify a bit: currently we DO notify the user about the upcoming\nwraparound point [1]:\n\n\"\"\"\nIf for some reason autovacuum fails to clear old XIDs from a table,\nthe system will begin to emit warning messages like this when the\ndatabase's oldest XIDs reach forty million transactions from the\nwraparound point:\n\nWARNING: database \"mydb\" must be vacuumed within 39985967 transactions\nHINT: To avoid a database shutdown, execute a database-wide VACUUM in\nthat database.\n\"\"\"\n\nSo I'm not sure how the notification Chris proposes should differ or\nwhy it is in scope of this patch. If the point was to make sure\ncertain existing notifications will be preserved - sure, why not.\n\n[1]: https://www.postgresql.org/docs/current/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 22 Nov 2022 17:14:23 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Tue, Nov 22, 2022 at 7:44 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi hackers,\n>\n> [ Excluding my personal e-mail from cc:, not sure how it got there.\n> Please don't cc: to afiskon@gmail.com, I'm not using it for reading\n> pgsql-hackers@. ]\n>\n> > I agree with Alexander, that notifications for DBA are a little bit\n> > outside the scope of the activity in this thread unless we've just\n> > dropped some existing notifications, considering they're not\n> > significant anymore. If that was the point, please Chris mention what\n> > existing notifications you want to return. 
I don't think it's a big\n> > deal to have the patch with certain notifications inherited from\n> > Master branch.\n>\n> To clarify a bit: currently we DO notify the user about the upcoming\n> wraparound point [1]:\n>\n> \"\"\"\n> If for some reason autovacuum fails to clear old XIDs from a table,\n> the system will begin to emit warning messages like this when the\n> database's oldest XIDs reach forty million transactions from the\n> wraparound point:\n>\n> WARNING: database \"mydb\" must be vacuumed within 39985967 transactions\n> HINT: To avoid a database shutdown, execute a database-wide VACUUM in\n> that database.\n> \"\"\"\n>\n> So I'm not sure how the notification Chris proposes should differ or\n> why it is in scope of this patch. If the point was to make sure\n> certain existing notifications will be preserved - sure, why not.\n\nIMHO, after having 64-bit XID this WARNING doesn't really make sense.\nThose warnings exist because those limits were problematic for 32-bit\nxid but now it is not so I think we should not have such warnings.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 22 Nov 2022 19:55:16 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Tue, Nov 22, 2022 at 10:01 AM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi Chris,\n>\n> > Right now the way things work is:\n> > 1. Database starts throwing warnings that xid wraparound is approaching\n> > 2. Database-owning team initiates an emergency response, may take\n> downtime or degradation of services as a result\n> > 3. People get frustrated with PostgreSQL because this is a reliability\n> problem.\n> >\n> > What I am worried about is:\n> > 1. Database is running out of space\n> > 2. Database-owning team initiates an emergency response and takes more\n> downtime to into a good spot\n> > 3. 
People get frustrated with PostgreSQL because this is a reliability\n> problem.\n> >\n> > If that's the way we go, I don't think we've solved that much. And as\n> humans we also bias our judgments towards newsworthy events, so rarer, more\n> severe problems are a larger perceived problem than the more routine, less\n> severe problems. So I think our image as a reliable database would suffer.\n> >\n> > An ideal resolution from my perspective would be:\n> > 1. Database starts throwing warnings that xid lag has reached severely\n> abnormal levels\n> > 2. Database owning team initiates an effort to correct this, and does\n> not take downtime or degradation of services as a result\n> > 3. People do not get frustrated because this is not a reliability\n> problem anymore.\n> >\n> > Now, 64-big xids are necessary to get us there but they are not\n> sufficient. One needs to fix the way we handle this sort of problem.\n> There is existing logic to warn if we are approaching xid wraparound. This\n> should be changed to check how many xids we have used rather than remaining\n> and have a sensible default there (optionally configurable).\n> >\n> > I agree it is not vacuum's responsibility. It is the responsibility of\n> the current warnings we have to avoid more serious problems arising from\n> this change. These should just be adjusted rather than dropped.\n>\n> I disagree with the axiom that XID wraparound is merely a symptom and\n> not a problem.\n>\n\nXID wraparound doesn't happen to healthy databases, nor does it happen to\ndatabases actively monitoring this possibility. The cases where it\nhappens, two circumstances are present:\n\n1. Autovacuum is stalled, and\n2. Monitoring is not checking for xid lag (which would be fixed by\nautovacuum if it were running properly).\n\nXID wraparound is downstream of those problems. At least that is my\nexperience. 
If you disagree, I would like to hear why.\n\nAdditionally those problems still will cause worse outages with this change\nunless there are some mitigating measures in place. If you don't like my\nproposal, I would be open to other mitigating measures. But I think there\nneed to be mitigating measures in a change like this.\n\n>\n> Using 32-bit XIDs was a reasonable design decision back when disk\n> space was limited and disks were slow. The drawback of this approach\n> is the need to do the wraparound but again back then it was a\n> reasonable design choice. If XIDs were 64-bit from the beginning users\n> could run one billion (1,000,000,000) TPS for 584 years without a\n> wraparound. We wouldn't have it similarly as there is no wraparound\n> for WAL segments. Now when disks are much faster and much cheaper\n> 32-bit XIDs are almost certainly not a good design choice anymore.\n> (Especially considering the fact that this particular patch mitigates\n> the problem of increased disk consumption greatly.)\n>\n\nI agree that 64-bit xids are a good idea. I just don't think that existing\nsafety measures should be ignored or reverted.\n\n>\n> Also I disagree with an argument that a DBA that doesn't monitor disk\n> space would care much about some strange warnings in the logs. If a\n> DBA doesn't monitor basic system metrics I'm afraid we can't help this\n> person much.\n>\n\nThe problem isn't just the lack of disk space, but the difficulty that\nstuck autovacuum runs pose in resolving the issue. Keep in mind that\neverything you can do to reclaim disk space (vacuum full, cluster,\npg_repack) will be significantly slowed down by an extremely bloated\ntable/index combination. The problem is that if you are running out of disk\nspace, and your time to recovery is much longer than expected, then you have a\nmajor problem. 
It's not just one or the other, but the combination that\nposes the real risk here.\n\nNow that's fine if you want to run a bloatless table engine but to my\nknowledge none of these are production-ready yet. ZHeap seems mostly\nstalled. Oriole is still experimental. But with the current PostgreSQL\ntable structure.\n\nA DBA can monitor disk space, but if the DBA is not also monitoring xid\nlag, then by the time corrective action is taken it may be too late.\n\n\n> I do agree that we could probably provide some additional help for the\n> rest of the users when it comes to configuring VACUUM. This is indeed\n> non-trivial. However I don't think this is in scope of this particular\n> patchset. I suggest we keep the focus in this discussion. If you have\n> a concrete proposal please consider starting a new thread.\n>\n> This at least is my personal opinion. Let's give the rest of the\n> community a chance to share their thoughts.\n>\n\nFair enough. As I say, my proposal that this needs mitigating measures\nhere comes from my experience with xid wraparound and vacuum runs that took\n36hrs+ to run. At present my objection stands, and I hope the committers\ntake that into account.\n\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>", "msg_date": "Thu, 24 Nov 2022 05:01:33 +0100", "msg_from": "Chris Travers <chris@orioledata.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi Chris,\n\n> XID wraparound doesn't happen to healthy databases\n> If you disagree, I would like to hear why.\n\nConsider the case when you run a slow OLAP query that takes 12h to\ncomplete and 100K TPS of fast OLTP-type queries on the same system.\nThe fast queries will consume all 32-bit XIDs in less than 12 hours,\nwhile the OLAP query started 12 hours ago didn't finish yet and thus\nits tuples can't be frozen.\n\n> I agree that 64-bit xids are a good idea. I just don't think that existing safety measures should be ignored or reverted.\n\nFair enough.\n\n> The problem isn't just the lack of disk space, but the difficulty that stuck autovacuum runs pose in resolving the issue. 
Keep in mind that everything you can do to reclaim disk space (vacuum full, cluster, pg_repack) will be significantly slowed down by an extremely bloated table/index comparison. The problem is that if you are running out of disk space, and your time to recovery much longer than expected, then you have a major problem. It's not just one or the other, but the combination that poses the real risk here.\n>\n> Now that's fine if you want to run a bloatless table engine but to my knowledge none of these are production-ready yet. ZHeap seems mostly stalled. Oriole is still experimental. But with the current PostgreSQL table structure.\n>\n> A DBA can monitor disk space, but if the DBA is not also monitoring xid lag, then by the time corrective action is taken it may be too late.\n\nGood point but I still don't think this is related to the XID\nwraparound problem.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 24 Nov 2022 11:36:04 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Fri, 21 Oct 2022 at 17:09, Maxim Orlov <orlovmg@gmail.com> wrote:\n\n> Reviews and opinions are very welcome!\n\nI'm wondering whether the safest way to handle this is by creating a\nnew TAM called \"heap64\", so that all storage changes happens there.\n(Obviously there are still many other changes in core, but they are\nmore easily fixed).\n\nThat would reduce the code churn around \"heap\", allowing us to keep it\nstable while we move to the brave new world.\n\nMany current users see stability as one of the greatest strengths of\nPostgres, so while I very much support this move, I wonder if this\ngives us a way to have both stability and innovation at the same time?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 24 Nov 2022 17:42:31 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, 
"msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Sun, Nov 20, 2022 at 11:58 PM Chris Travers <chris.travers@gmail.com> wrote:\n> I can start by saying I think it would be helpful (if the other issues are approached reasonably) to have 64-bit xids, but there is an important piece of context in preventing xid wraparounds that seems missing from this patch unless I missed something.\n>\n> XID wraparound is a symptom, not an underlying problem. It usually occurs when autovacuum or other vacuum strategies have unexpected stalls and therefore fail to work as expected. Shifting to 64-bit XIDs dramatically changes the sorts of problems that these stalls are likely to pose to operational teams. -- you can find you are running out of storage rather than facing an imminent database shutdown. Worse, this patch delays the problem until some (possibly far later!) time, when vacuum will take far longer to finish, and options for resolving the problem are diminished. As a result I am concerned that merely changing xids from 32-bit to 64-bit will lead to a smaller number of far more serious outages.\n\nThis is exactly what I think (except perhaps for the part about having\nfewer outages overall). The more transaction ID space you need, the\nmore space you're likely to need in the near future.\n\nWe can all agree that having more runway is better than having less\nrunway, at least in some abstract sense, but that in itself doesn't\nhelp the patch series very much. The first time the system-level\noldestXid (or database level datminfrozenxid) attains an age of 2\nbillion XIDs will usually *also* be the first time it attains an age\nof (say) 300 million XIDs. 
Even 300 million is usually a huge amount\nof XID space relative to (say) the number of XIDs used every 24 hours.\nSo I know exactly what you mean about just addressing a symptom.\n\nThe whole project seems to just ignore basic, pertinent questions.\nQuestions like: why are we falling behind like this in the first\nplace? And: If we don't catch up soon, why should we be able to catch\nup later on? Falling behind on freezing is still a huge problem with\n64-bit XIDs.\n\nPart of the problem with the current design is that table age has\napproximately zero relationship with the true cost of catching up on\nfreezing -- we are \"using the wrong units\", in a very real sense. In\ngeneral we may have to do zero freezing to advance a table's\nrelfrozenxid age by a billion XIDs, or we might have to write\nterabytes of FREEZE_PAGE records to advance a similar looking table's\nrelfrozenxid by just one single XID (it could also swing wildly over\ntime for the same table). Which the system simply doesn't try to\nreason about right now.\n\nThere are no settings for freezing that use physical units, and frame\nthe problem as a problem of being behind by this many unfrozen pages\n(they are all based on XID age). And so the problem with letting table\nage get into the billions isn't even that we'll never catch up -- we\nactually might catch up very easily! The real problem is that we have\nno way of knowing ahead of time (at least not right now). VACUUM\nshould be managing the debt, *and* the uncertainty about how much debt\nwe're really in. 
VACUUM needs to have a dynamic, probabilistic\nunderstanding of what's really going on -- something much more\nsophisticated than looking at table age in autovacuum.c.\n\nOne reason why you might want to advance relfrozenxid proactively is\nto give the system a better general sense of the true relationship\nbetween logical XID space and physical freezing for a given table and\nworkload -- it gives a clearer picture about the conditions in the\ntable. The relationship between SLRU space and physical heap pages and\nthe work of freezing is made somewhat clearer by a more proactive\napproach to advancing relfrozenxid. That's one way that VACUUM can\nlower the uncertainty I referred to.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 24 Nov 2022 11:25:45 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi hackers,\n\n> I'm wondering whether the safest way to handle this is by creating a\n> new TAM called \"heap64\", so that all storage changes happens there.\n\n> Many current users see stability as one of the greatest strengths of\n> Postgres, so while I very much support this move, I wonder if this\n> gives us a way to have both stability and innovation at the same time?\n\nThat would be nice.\n\nHowever from what I see TransactionId is a type used globally in\nPostgreSQL. It is part of structures used by TAM interface, used in\nWAL records, etc. So we will have to learn these components to work\nwith 64-bit XIDs anyway and then start thinking about cases like: when\na user runs a transaction affecting two tables, a heap32 one and\nheap64 one and we will have to figure out which tuples are visible and\nwhich are not. This perhaps is doable but the maintenance burden for\nthe project will be too high IMO.\n\nIt seems to me that the best option we can offer for the users looking\nfor stability is to use the latest PostgreSQL version with 32-bit\nXIDs. 
Assuming these users care that much about this particular design\nchoice of course.\n\n> The whole project seems to just ignore basic, pertinent questions.\n> Questions like: why are we falling behind like this in the first\n> place? And: If we don't catch up soon, why should we be able to catch\n> up later on? Falling behind on freezing is still a huge problem with\n> 64-bit XIDs.\n\nIs the example I provided above wrong?\n\n\"\"\"\nConsider the case when you run a slow OLAP query that takes 12h to\ncomplete and 100K TPS of fast OLTP-type queries on the same system.\nThe fast queries will consume all 32-bit XIDs in less than 12 hours,\nwhile the OLAP query started 12 hours ago didn't finish yet and thus\nits tuples can't be frozen.\n\"\"\"\n\nIf it is, please let me know. I would very much like to know if my\nunderstanding here is flawed.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 25 Nov 2022 11:37:44 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi;\n\nTrying to discuss where we are talking past eachother.....\n\nOn Fri, Nov 25, 2022 at 9:38 AM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi hackers,\n>\n> > I'm wondering whether the safest way to handle this is by creating a\n> > new TAM called \"heap64\", so that all storage changes happens there.\n>\n> > Many current users see stability as one of the greatest strengths of\n> > Postgres, so while I very much support this move, I wonder if this\n> > gives us a way to have both stability and innovation at the same time?\n>\n> That would be nice.\n>\n> However from what I see TransactionId is a type used globally in\n> PostgresSQL. It is part of structures used by TAM interface, used in\n> WAL records, etc. 
So we will have to learn these components to work\n> with 64-bit XIDs anyway and then start thinking about cases like: when\n> a user runs a transaction affecting two tables, a heap32 one and\n> heap64 one and we will have to figure out which tuples are visible and\n> which are not. This perhaps is doable but the maintenance burden for\n> the project will be too high IMO.\n>\n> It seems to me that the best option we can offer for the users looking\n> for stability is to use the latest PostgreSQL version with 32-bit\n> XIDs. Assuming these users care that much about this particular design\n> choice of course.\n>\n\nI didn't see any changes to pg_upgrade to make this change possible on\nupgrade. Is that also outside of the scope of your patch set? If so how\nis that continuity supposed to be ensured?\n\nAlso related to that, I think you would have to have a check on streaming\nreplication that both instances use the same xid format (that you don't\naccidently upgrade this somehow), since this is set per db cluster, right?\n\n>\n> > The whole project seems to just ignore basic, pertinent questions.\n> > Questions like: why are we falling behind like this in the first\n> > place? And: If we don't catch up soon, why should we be able to catch\n> > up later on? Falling behind on freezing is still a huge problem with\n> > 64-bit XIDs.\n>\n> Is the example I provided above wrong?\n>\n> \"\"\"\n> Consider the case when you run a slow OLAP query that takes 12h to\n> complete and 100K TPS of fast OLTP-type queries on the same system.\n> The fast queries will consume all 32-bit XIDs in less than 12 hours,\n> while the OLAP query started 12 hours ago didn't finish yet and thus\n> its tuples can't be frozen.\n> \"\"\"\n>\n> If it is, please let me know. 
I would very much like to know if my\n> understanding here is flawed.\n>\n\nSo, you have described a scenario we cannot support today (because xids\nwould be exhausted within 5.5 hours at that transactional rate).\nAdditionally as PostgreSQL becomes more capable, this sort of scale will\nincreasingly be within reach and that is an important point in favor of\nthis effort.\n\nThis being said, there is another set of xid wraparound cases which today\nis much larger in number that I think would be hurt if this patchset were\nto be accepted into Postgres without mitigating measures which you consider\nout of bounds -- the cases like Mailchimp, Adjust, and the like. This is\nwhy I keep stressing this, and I don't think waiving away concerns about\nuse cases outside of the one you are focusing on is helpful, particularly\nfrom those of us who have faced xid wraparounds in these cases in the\npast. In these cases, database teams are usually faced with an operational\nemergency while tools like vacuum, pg_repack, etc are severely degraded due\nto getting so far behind on freezing. The deeper the hole, the harder it\nwill be to dig out of.\n\nEvery large-scale high-throughput database I have ever worked on had\nlong-running query alerts precisely because of the impact on vacuum and the\ndownstream performance impacts. I would love to get to a point where this\nwasn't necessary and maybe in a few specific workloads we might be there\nvery soon. 
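Chris's 5.5-hour figure above is simple arithmetic; here is a quick sanity check of it (a sketch only — it assumes the full ~2^31 XID horizon is usable at a sustained 100K TPS, and ignores reserved XIDs and the safety margins at which autovacuum and the wraparound warnings actually kick in):

```python
# Back-of-the-envelope time to exhaust the 32-bit XID horizon at a
# sustained transaction rate. The modulo-2^32 comparison rules leave
# roughly 2^31 XIDs of usable distance ahead of the oldest unfrozen XID;
# real headroom is smaller because of reserved XIDs and vacuum margins.
XID_HORIZON = 2 ** 31          # ~2.1 billion XIDs
TPS = 100_000                  # the OLTP rate from Aleksander's example

seconds_to_exhaustion = XID_HORIZON / TPS
hours_to_exhaustion = seconds_to_exhaustion / 3600
print(f"{hours_to_exhaustion:.1f} hours")   # ~6.0 hours
```

With the round 2-billion figure Chris appears to be using, the same division gives about 5.5 hours — either way, well under the 12-hour OLAP query in the example.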
The effort you are engaging in here is an important part of the\npath to get there, but let's not forget the people who today are facing xid\nwraparounds due to vacuum problems and what this sort of set of changes\nwill mean for them.\n\n-- \n> Best regards,\n> Aleksander Alekseev\n>\n>\n>", "msg_date": "Sat, 26 Nov 2022 10:09:25 +0100", "msg_from": "Chris Travers <chris@orioledata.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi!\n\nThere are quite some time after previous update.\nHere is a rebased version of the patch set.\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Mon, 28 Nov 2022 19:08:54 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Sat, Nov 26, 2022 at 4:08 AM Chris Travers <chris@orioledata.com> wrote:\n> I didn't see any changes to pg_upgrade to make this change possible on upgrade. Is that also outside of the scope of your patch set? 
If so how is that continuity supposed to be ensured?\n\nThe scheme is documented in their 0006 patch, in a README.XID file.\nI'm not entirely confident that it's the best design and have argued\nagainst it in the past, but it's not crazy.\n\nMore generally, while I think there's plenty of stuff to be concerned\nabout in this patch set and while I'm somewhat skeptical about the\nlikelihood of its getting or staying committed, I can't really\nunderstand your concerns in particular. The thrust of your concern\nseems to be that if we allow people to get further behind, recovery\nwill be more difficult. I'm not sure I see the problem. Suppose that\nwe adopt this proposal and that it is bug-free. Now, consider a user\nwho gets 8 billion XIDs behind. They probably have to vacuum pretty\nmuch every page in the database to do that, or least every page in the\ntables that haven't been vacuumed recently. But that would likely also\nbe true if they were 800 million XIDs behind, as is possible today.\nThe effort to catch up doesn't increase linearly with how far behind\nyou are, and is always bounded by the DB size.\n\nIt is true that if the table is progressively bloating, it is likely\nto be more bloated by the time you are 8 billion XIDs behind than it\nwas when you were 800 million XIDs behind. I don't see that as a very\ngood reason not to adopt this patch, because you can bloat the table\nby an arbitrarily large amount while consuming only a small number of\nXiDs, even just 1 XID. Protecting against bloat is good, but shutting\ndown the database when the XID age reaches a certain value is not a\nparticularly effective way of doing that, so saying that we'll be\nhurting people by not shutting down the database at the point where we\ndo so today doesn't ring true to me. 
I think that most people who get\nto the point of wraparound shutdown have workloads where bloat isn't a\nhuge issue, because those who do start having problems with the bloat\nway before they run out of XIDs.\n\nIt would be entirely possible to add a parameter to the system that\nsays \"hey, you know we can keep running even if we're a shazillion\nXIDs behind, but instead shut down when we are behind by this number\nof XIDs.\" Then, if somebody wants to force an automatic shutdown at\nthat point, they could, and I think that then the scenario you're\nworried about just can't happen any more. But isn't that a little bit\nsilly? You could also just monitor how far behind you are and page the\nDBA when you get behind by more than a certain number of XIDs. Then,\nyou wouldn't be risking a shutdown, and you'd still be able to stay on\ntop of the XID ages of your tables.\n\nPhilosophically, I disagree with the idea of shutting down the\ndatabase completely in any situation in which a reasonable alternative\nexists. Losing read and write availability is really bad, and I don't\nthink it's what users want. I think that most users want the database\nto degrade gracefully when things do not go according to plan.\nIdeally, they'd like everything to Just Work, but reasonable users\nunderstand that sometimes there are going to be problems, and in my\nexperience, what makes them happy is when the database acts to contain\nthe scope of the problem so that it affects their workload as little\nas possible, rather than acting to magnify the problem so that it\nimpacts their workload as much as possible. This patch, implementation\nand design concerns to one side, does that.\n\nI don't believe there's a single right answer to the question of what\nto do about vacuum falling behind, and I think it's worth exploring\nmultiple avenues to improve the situation. 
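(As a back-of-the-envelope aside on the time scales involved -- a sketch with hypothetical transaction rates, not a measurement from any real system:)

```python
# Rough arithmetic only: how long the 32-bit XID space lasts at a
# sustained allocation rate.  The usable budget is about 2^31 XIDs
# (~2.1 billion), because comparisons are circular modulo 2^32 and
# half of the space has to stay "in the future".
XID_BUDGET = 2**31

def hours_until_wraparound(xids_per_second: float) -> float:
    """Hours needed to consume the ~2^31 XID budget at a constant rate."""
    return XID_BUDGET / xids_per_second / 3600.0

for rate in (10_000, 50_000, 100_000):
    print(f"{rate:>7} XIDs/sec -> ~{hours_until_wraparound(rate):.1f} hours")
```

At 100,000 XID-consuming transactions per second the whole budget is gone in roughly six hours, which is the "burn through the XID space in hours" scale mentioned below.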
You can have vacuum never\nrun on a table at all, say because all of the workers are busy\nelsewhere, or because the table is locked until the heat death of the\nuniverse. You can have vacuum run on a table but too slowly to do any\ngood, because of the vacuum cost delay mechanism. You can have vacuum\nrun and finish but do little good because of prepared transactions or\nreplication slots or long-running queries. It's reasonable to think\nabout what kinds of steps might help in those different scenarios, and\nespecially to think about what kind of steps might help in multiple\ncases. We should do that. But, I don't think any of that means that we\ncan ignore the need for some kind of expansion of the XID space\nforever. Computers are getting faster. It's already possible to burn\nthrough the XID space in hours, and the number of hours is going to go\ndown over time and maybe eventually the right unit will be minutes, or\neven seconds. Sometime before then, we need to do something to make\nthe runway bigger, or else just give up on PostgreSQL being a relevant\npiece of software.\n\nPerhaps the thing we need to do is not exactly this, but if not, it's\nprobably a sibling or cousin of this.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 28 Nov 2022 11:52:58 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Mon, Nov 28, 2022 at 8:53 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> It is true that if the table is progressively bloating, it is likely\n> to be more bloated by the time you are 8 billion XIDs behind than it\n> was when you were 800 million XIDs behind. I don't see that as a very\n> good reason not to adopt this patch, because you can bloat the table\n> by an arbitrarily large amount while consuming only a small number of\n> XiDs, even just 1 XID. 
Protecting against bloat is good, but shutting\n> down the database when the XID age reaches a certain value is not a\n> particularly effective way of doing that, so saying that we'll be\n> hurting people by not shutting down the database at the point where we\n> do so today doesn't ring true to me.\n\nI can't speak for Chris, but I think that almost everybody will agree\non this much, without really having to think about it. It's easy to\nsee that having more XID space is, in general, strictly a good thing.\nIf there was a low risk way of getting that benefit, then I'd be in\nfavor of it.\n\nHere's the problem that I see with this patch: I don't think that the\nrisks are commensurate with the benefits. I can imagine being in favor\nof an even more invasive patch that (say) totally removes the concept\nof freezing, but that would have to be a very different sort of\ndesign.\n\n> Philosophically, I disagree with the idea of shutting down the\n> database completely in any situation in which a reasonable alternative\n> exists. Losing read and write availability is really bad, and I don't\n> think it's what users want.\n\nAt a certain point it may make more sense to activate XidStopLimit\nprotections (which will only prevent new XID allocations) instead of\ngetting further behind on freezing, even in a world where we're never\nstrictly obligated to activate XidStopLimit. It may in fact be the\nlesser evil, even with 64-bit XIDs -- because we still have to freeze,\nand the factors that drive when and how we freeze mostly aren't\nchanged.\n\nFundamentally, when we're falling behind on freezing, at a certain\npoint we can expect to keep falling behind -- unless some kind of\nmajor shift happens. That's just how freezing works, with or without\n64-bit XIDs/MXIDs. 
If VACUUM isn't keeping up with the allocation of\ntransactions, then the system is probably misconfigured in some way.\nWe should do our best to signal this as early and as frequently as\npossible, and we should mitigate specific hazards (e.g. old temp\ntables) if at all possible. We should activate the failsafe when\nthings really start to look dicey (which, incidentally, the patch just\nremoves). These mitigations may be very effective, but in the final\nanalysis they don't address the fundamental weakness in freezing.\n\nGranted, the specifics of the current XidStopLimit mechanism are\nunlikely to directly carry over to 64-bit XIDs. XidStopLimit is\nstructured in a way that doesn't actually consider freeze debt in\nunits like unfrozen pages. Like Chris, I just don't see why the patch\nobviates the need for something like XidStopLimit, since the patch\ndoesn't remove freezing. An improved XidStopLimit mechanism might even\nend up kicking in *before* the oldest relfrozenxid reached 2 billion\nXIDs, depending on the specifics.\n\nRemoving the failsafe mechanism seems misguided to me for similar\nreasons. I recently learned that Amazon RDS has set a *lower*\nvacuum_failsafe_age default than the standard default (lowered from\nthe standard 1.6 billion to only 1.2 billion on RDS). This decision predates my\njoining AWS. It seems as if practical experience has shown that\nallowing any table's age(relfrozenxid) to get too far past a billion\nis not a good idea. At least it's not a good idea on modern Postgres\nversions, which have the freeze map.\n\nWe really shouldn't have to rely on having billions of XIDs available\nin the first place -- XID space isn't really a fungible commodity.\nIt's much more important to recognize that something (some specific\nthing) just isn't working as designed, which in general could be\npretty far removed from freezing. For example, index corruption could\ndo it (at least without the failsafe). Some kind of autovacuum\nstarvation could do it. 
It's almost always more complicated than \"not\nenough available XID space\".\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 28 Nov 2022 13:09:28 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Mon, Nov 28, 2022 at 4:09 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Granted, the specifics of the current XidStopLimit mechanism are\n> unlikely to directly carry over to 64-bit XIDs. XidStopLimit is\n> structured in a way that doesn't actually consider freeze debt in\n> units like unfrozen pages. Like Chris, I just don't see why the patch\n> obviates the need for something like XidStopLimit, since the patch\n> doesn't remove freezing. An improved XidStopLimit mechanism might even\n> end up kicking in *before* the oldest relfrozenxid reached 2 billion\n> XIDs, depending on the specifics.\n\nWhat is the purpose of using 64-bit XIDs, if not to avoid having to\nstop the world when we run short of XIDs?\n\nI'd say that if this patch, or any patch with broadly similar goals,\nfails to remove xidStopLimit, it might as well not exist.\n\nxidStopLimit is not a general defense against falling too far behind\non freezing, and in general, there is no reason to think that we need\nsuch a defense. xidStopLimit is a defense against reusing the same XID\nand thus causing irreversible database corruption. 
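(For concreteness, the circular comparison that makes XID reuse dangerous can be sketched as follows -- a simplified illustration of the signed 32-bit difference idea, not the actual transam.c code:)

```python
# Simplified sketch of circular XID comparison: a is treated as "older"
# than b when the signed 32-bit difference (a - b) is negative.  Every
# XID therefore sees about 2 billion XIDs as its past and 2 billion as
# its future; if the counter wraps and an old XID is reused without
# freezing, tuples from the previous epoch abruptly look like they are
# "in the future" and disappear from view.
def xid_precedes(a: int, b: int) -> bool:
    diff = (a - b) & 0xFFFFFFFF   # wrap the difference to 32 bits
    if diff >= 0x80000000:        # reinterpret as a signed int32
        diff -= 0x100000000
    return diff < 0

assert xid_precedes(100, 200)            # ordinary ordering
assert xid_precedes(2**32 - 10, 10)      # still "older" across the wrap
assert not xid_precedes(10, 2**32 - 10)  # and 10 is the newer of the two
```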
When that\npossibility no longer exists, it has outlived its usefulness and we\nshould be absolutely delighted to bid it adieu.\n\nIt seems like you and Chris are proposing the moral equivalent of\npaying off your home loan but still sending a check to the mortgage\ncompany every month just to be sure they don't get mad.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 28 Nov 2022 16:30:22 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Mon, Nov 28, 2022 at 04:30:22PM -0500, Robert Haas wrote:\n> What is the purpose of using 64-bit XIDs, if not to avoid having to\n> stop the world when we run short of XIDs?\n> \n> I'd say that if this patch, or any patch with broadly similar goals,\n> fails to remove xidStopLimit, it might as well not exist.\n> \n> xidStopLimit is not a general defense against falling too far behind\n> on freezing, and in general, there is no reason to think that we need\n> such a defense. xidStopLimit is a defense against reusing the same XID\n> and thus causing irreversible database corruption. When that\n> possibility no longer exists, it has outlived its usefulness and we\n> should be absolutely delighted to bid it adieu.\n> \n> It seems like you and Chris are proposing the moral equivalent of\n> paying off your home loan but still sending a check to the mortgage\n> company every month just to be sure they don't get mad.\n\nI think the problem is that we still have bloat with 64-bit XIDs,\nspecifically pg_xact and pg_multixact files. Yes, that bloat is less\nserious, but it is still an issue worth reporting in the server logs,\nthough not serious enough to stop the server from accepting write queries.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. 
They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Mon, 28 Nov 2022 16:52:27 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Mon, Nov 28, 2022 at 1:30 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> What is the purpose of using 64-bit XIDs, if not to avoid having to\n> stop the world when we run short of XIDs?\n\nI agree that the xidStopLimit mechanism was designed with the specific\ngoal of preventing \"true\" physical XID wraparound that results in\nwrong answers to queries. It was not written with the intention of\nlimiting the accumulation of freeze debt, which would have to use\nunits like unfrozen heap pages to make any sense. That is a historic\nfact -- no question.\n\nI think that it is nevertheless quite possible that just refusing to\nallocate any more XIDs could make sense with 64-bit XIDs, where we\ndon't strictly have to, in order to keep the system fully functional. That might\nstill be the lesser evil, in that other world. The cutoff itself would\ndepend on many workload details, I suppose.\n\nImagine if we actually had 64-bit XIDs -- let's assume for a moment\nthat it's a done deal. This raises a somewhat awkward question: do you\njust let the system get further and further behind on freezing,\nforever? We can all agree that 2 billion XIDs is very likely the wrong\ntime to start refusing new XIDs -- because it just isn't designed with\ndebt in mind. But what's the right time, if any? How much debt is too\nmuch?\n\nAt the very least these seem like questions that deserve serious consideration.\n\n> I'd say that if this patch, or any patch with broadly similar goals,\n> fails to remove xidStopLimit, it might as well not exist.\n\nWell, it could in principle be driven by lots of different kinds of\ninformation, and make better decisions by actually targeting freeze\ndebt in some way. 
An \"enhanced version of xidStopLimit with 64-bit\nXIDs\" could kick in far far later than it currently would. Obviously\nthat has some value.\n\nI'm not claiming to know how to build this \"better xidStopLimit\nmechanism\", by the way. I'm not seriously proposing it. Mostly I'm\njust saying that the question \"where do you draw the line if not at 2\nbillion XIDs?\" is a very pertinent question. It is not a question that\nis made any less valid by the fact that we already know that 2 billion\nXIDs is pretty far from optimal in almost all cases. Having some limit\nbased on something is likely to be more effective than having no limit\nbased on nothing.\n\nAdmittedly this argument works a lot better with the failsafe than it\ndoes with xidStopLimit. Both are removed by the patch.\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 28 Nov 2022 13:52:42 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Mon, Nov 28, 2022 at 1:52 PM Bruce Momjian <bruce@momjian.us> wrote:\n> I think the problem is that we still have bloat with 64-bit XIDs,\n> specifically pg_xact and pg_multixact files. Yes, that bloat is less\n> serious, but it is still an issue worth reporting in the server logs,\n> though not serious enough to stop the server from write queries.\n\nThat's definitely a big part of it.\n\nAgain, I don't believe that the idea is fundamentally without merit.\nJust that it's not worth it, given that having more XID space is very\nmuch not something that I think fixes most of the problems. And given\nthe real risk of serious bugs with something this invasive.\n\nI believe that it would be more useful to focus on just not getting\ninto trouble in the first place, as well as on mitigating specific\nproblems that lead to the system reaching xidStopLimit in practice. I\ndon't think that there is any good reason to allow datfrozenxid to go\npast about a billion. 
When it does the interesting questions are\nquestions about what went wrong, and how that specific failure can be\nmitigated in a fairly direct way.\n\nWe've already used way too much \"XID space runway\", so why should using\neven more help? It might, I suppose, but it almost seems like a\ndistraction to me, as somebody that wants to make things better for\nusers in general. As long as the system continues to misbehave (in\nwhatever way it happens to be misbehaving), why should any amount of\nXID space ever be enough?\n\nI think that we'll be able to get rid of freezing in a few years time.\nBut as long as we have freezing, we have these problems.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 28 Nov 2022 14:06:06 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Mon, Nov 28, 2022 at 1:52 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I'm not claiming to know how to build this \"better xidStopLimit\n> mechanism\", by the way. I'm not seriously proposing it. Mostly I'm\n> just saying that the question \"where do you draw the line if not at 2\n> billion XIDs?\" is a very pertinent question. It is not a question that\n> is made any less valid by the fact that we already know that 2 billion\n> XIDs is pretty far from optimal in almost all cases. Having some limit\n> based on something is likely to be more effective than having no limit\n> based on nothing.\n>\n> Admittedly this argument works a lot better with the failsafe than it\n> does with xidStopLimit. Both are removed by the patch.\n\nCome to think of it, if you were going to do something like this it\nwould probably work by throttling XID allocations, with a gradual\nramp-up. It would rarely get to the point that the system refused to\nallocate XIDs completely.\n\nIt's not fundamentally unreasonable to force the application to live\nwithin its means by throttling. 
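(Purely as a hypothetical sketch of what such a gradual ramp-up could look like -- the thresholds, units, and quadratic shape here are invented for illustration, not taken from any actual patch:)

```python
# Hypothetical policy: no delay while the oldest XID age is under a
# start threshold, then a per-allocation delay that grows smoothly
# toward a cap as the age approaches a hard limit.  The system slows
# down but never refuses allocations outright.
RAMP_START = 1_000_000_000    # begin throttling around 1 billion of age
HARD_LIMIT = 2_000_000_000    # approach the maximum delay near 2 billion
MAX_DELAY_MS = 100.0

def allocation_delay_ms(oldest_xid_age: int) -> float:
    if oldest_xid_age <= RAMP_START:
        return 0.0
    frac = (oldest_xid_age - RAMP_START) / (HARD_LIMIT - RAMP_START)
    return MAX_DELAY_MS * min(frac, 1.0) ** 2   # quadratic ramp-up

assert allocation_delay_ms(500_000_000) == 0.0      # healthy system
assert allocation_delay_ms(1_500_000_000) == 25.0   # halfway up the ramp
assert allocation_delay_ms(2_500_000_000) == 100.0  # capped, never refuses
```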
Feedback that slows down the rate of\nwrites is much more common in the LSM tree world, within systems like\nMyRocks [1], where the risk of the system being destabilized by debt\nis more pressing.\n\nAs I said, I don't think that this is a particularly promising way of\naddressing problems with Postgres XID space exhaustion, since I\nbelieve that the underlying issue isn't usually a simple lack of \"XID\nspace slack capacity\". But if you assume that I'm wrong here (if you\nassume that we very often don't have the ability to freeze lazily\nenough), then ISTM that throttling or feedback to stall new writes is\na very reasonable option. In fact, it's practically mandatory.\n\n[1] https://docs.google.com/presentation/d/1WgP-SlKay5AnSoVDSvOIzmu7edMmtYhdywoa0oAR4JQ/edit#slide=id.g8839c9d71b_0_79\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 28 Nov 2022 15:15:01 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi;\n\nI suppose I must not have been clear in what I am suggesting we do and\nwhy. I will try to answer specific points below and then restate what I\nthink the problem is, and what I think should be done about it.\n\nOn Mon, Nov 28, 2022 at 5:53 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Sat, Nov 26, 2022 at 4:08 AM Chris Travers <chris@orioledata.com>\n> wrote:\n> > I didn't see any changes to pg_upgrade to make this change possible on\n> upgrade. Is that also outside of the scope of your patch set? If so how\n> is that continuity supposed to be ensured?\n>\n> The scheme is documented in their 0006 patch, in a README.XID file.\n> I'm not entirely confident that it's the best design and have argued\n> against it in the past, but it's not crazy.\n>\n\nRight. Per previous discussion I thought there was some discussion of\nallowing people to run with the existing behavior. I must have been\nmistaken. 
If that is off the table then pg_upgrade and runtime replication\nchecks don't matter.\n\n>\n> More generally, while I think there's plenty of stuff to be concerned\n> about in this patch set and while I'm somewhat skeptical about the\n> likelihood of its getting or staying committed, I can't really\n> understand your concerns in particular. The thrust of your concern\n> seems to be that if we allow people to get further behind, recovery\n> will be more difficult. I'm not sure I see the problem. Suppose that\n> we adopt this proposal and that it is bug-free. Now, consider a user\n> who gets 8 billion XIDs behind. They probably have to vacuum pretty\n> much every page in the database to do that, or least every page in the\n> tables that haven't been vacuumed recently. But that would likely also\n> be true if they were 800 million XIDs behind, as is possible today.\n> The effort to catch up doesn't increase linearly with how far behind\n> you are, and is always bounded by the DB size.\n>\n\nRight. I agree with all of that.\n\n>\n> It is true that if the table is progressively bloating, it is likely\n> to be more bloated by the time you are 8 billion XIDs behind than it\n> was when you were 800 million XIDs behind. I don't see that as a very\n> good reason not to adopt this patch, because you can bloat the table\n> by an arbitrarily large amount while consuming only a small number of\n> XiDs, even just 1 XID. Protecting against bloat is good, but shutting\n> down the database when the XID age reaches a certain value is not a\n> particularly effective way of doing that, so saying that we'll be\n> hurting people by not shutting down the database at the point where we\n> do so today doesn't ring true to me. 
I think that most people who get\n> to the point of wraparound shutdown have workloads where bloat isn't a\n> huge issue, because those who do start having problems with the bloat\n> way before they run out of XIDs.\n>\n\nTo be clear, I never suggested shutting down the database. What I have\nsuggested is that repurposing the current approaching-xid-wraparound\nwarnings to start complaining loudly when a threshold is exceeded would be\nhelpful. I think it makes sense to make that threshold configurable\nespecially if we eventually have people running bloat-free table structures.\n\nThere are two fundamental problems here. The first is that if, as you say,\na table is progressively bloating and we are getting further and further\nbehind on vacuuming and freezing, something is seriously wrong and we need\nto do something about it. In these cases, I my experience is that\nvacuuming and related tools tend to suffer degraded performance, and\ndetermining how to solve the problem takes quite a bit more time than a\nroutine bloat issue would. So what I am arguing against is treating the\nproblem just as a bloat issue. If you get there due to vacuum being slow,\nsomething else is wrong and you are probably going to have to find and fix\nthat as well in order to catch up. At least that's my experience.\n\nI don't object to the db continuing to run, allocate xids etc. What I\nobject to is it doing so in silently where things are almost certainly\ngoing very wrong.\n\n>\n> It would be entirely possible to add a parameter to the system that\n> says \"hey, you know we can keep running even if we're a shazillion\n> XiDs behind, but instead shut down when we are behind by this number\n> of XIDs.\" Then, if somebody wants to force an automatic shutdown at\n> that point, they could, and I think that then the scenario you're\n> worried about just can't happen any more . But isn't that a little bit\n> silly? 
You could also just monitor how far behind you are and page the\n> DBA when you get behind by more than a certain number of XIDs. Then,\n> you wouldn't be risking a shutdown, and you'd still be able to stay on\n> top of the XID ages of your tables.\n>\n> Philosophically, I disagree with the idea of shutting down the\n> database completely in any situation in which a reasonable alternative\n> exists. Losing read and write availability is really bad, and I don't\n> think it's what users want. I think that most users want the database\n> to degrade gracefully when things do not go according to plan.\n> Ideally, they'd like everything to Just Work, but reasonable users\n> understand that sometimes there are going to be problems, and in my\n> experience, what makes them happy is when the database acts to contain\n> the scope of the problem so that it affects their workload as little\n> as possible, rather than acting to magnify the problem so that it\n> impacts their workload as much as possible. This patch, implementation\n> and design concerns to one side, does that.\n>\n> I don't believe there's a single right answer to the question of what\n> to do about vacuum falling behind, and I think it's worth exploring\n> multiple avenues to improve the situation. You can have vacuum never\n> run on a table at all, say because all of the workers are busy\n> elsewhere, or because the table is locked until the heat death of the\n> universe. You can have vacuum run on a table but too slowly to do any\n> good, because of the vacuum cost delay mechanism. You can have vacuum\n> run and finish but do little good because of prepared transactions or\n> replication slots or long-running queries. It's reasonable to think\n> about what kinds of steps might help in those different scenarios, and\n> especially to think about what kind of steps might help in multiple\n> cases. We should do that. 
But, I don't think any of that means that we\n> can ignore the need for some kind of expansion of the XID space\n> forever. Computers are getting faster. It's already possible to burn\n> through the XID space in hours, and the number of hours is going to go\n> down over time and maybe eventually the right unit will be minutes, or\n> even seconds. Sometime before then, we need to do something to make\n> the runway bigger, or else just give up on PostgreSQL being a relevant\n> piece of software.\n>\n> Perhaps the thing we need to do is not exactly this, but if not, it's\n> probably a sibling or cousin of this.\n>\n\nTo be clear, I am not opposed to doing this.  I just think there is a small\nmissing piece which would avoid operational nightmares.\n\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>", "msg_date": "Tue, 29 Nov 2022 14:05:19 +0100", "msg_from": "Chris Travers <chris@orioledata.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Mon, Nov 28, 2022 at 11:06 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Mon, Nov 28, 2022 at 1:52 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > I think the problem is that we still have bloat with 64-bit XIDs,\n> > specifically pg_xact and pg_multixact files.  Yes, that bloat is less\n> > serious, but it is still an issue worth reporting in the server logs,\n> > though not serious enough to stop the server from write queries.\n>\n> That's definitely a big part of it.\n>\n> Again, I don't believe that the idea is fundamentally without merit.\n> Just that it's not worth it, given that having more XID space is very\n> much not something that I think fixes most of the problems. And given\n> the real risk of serious bugs with something this invasive.\n\n\n> I believe that it would be more useful to focus on just not getting\n> into trouble in the first place, as well as on mitigating specific\n> problems that lead to the system reaching xidStopLimit in practice. I\n> don't think that there is any good reason to allow datfrozenxid to go\n> past about a billion. When it does the interesting questions are\n> questions about what went wrong, and how that specific failure can be\n> mitigated in a fairly direct way.\n>\n> We've already used way to much \"XID space runway\", so why should using\n> even more help? 
It might, I suppose, but it almost seems like a\n> distraction to me, as somebody that wants to make things better for\n> users in general. As long as the system continues to misbehave (in\n> whatever way it happens to be misbehaving), why should any amount of\n> XID space ever be enough?\n>\n\nSo I think the problem is that PostgreSQL is becoming more and more\nscalable, hardware is becoming more capable, and certain use cases are\ncontinuing to scale up. Over time, we tend to find ourselves approaching\nthe end of the runway at ever higher velocities. That's a problem that\nwill get significantly worse over time.\n\nOf course, as I think we agree, the priorities should be (in order):\n1. Avoid trouble\n2. Recover from trouble early\n3. Provide more and better options for recovery.\n\nI think 64-bit XIDs are a very good idea, but they really fit in this bottom\ntier. Not being up against mathematical limits to the software when\nthings are going bad is certainly a good thing. But I am really worried\nabout the attitude that this patch really avoids trouble because in many\ncases, I don't think it does, and therefore I believe we need to make sure\nwe are not reducing visibility of underlying problems.\n\n>\n> I think that we'll be able to get rid of freezing in a few years time.\n> But as long as we have freezing, we have these problems.\n>\n> --\n> Peter Geoghegan\n>\n", "msg_date": "Tue, 29 Nov 2022 14:35:20 +0100", "msg_from": "Chris Travers <chris@orioledata.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Tue, Nov 29, 2022 at 02:35:20PM +0100, Chris Travers wrote:\n> So I think the problem is that PostgreSQL is becoming more and more scalable,\n> hardware is becoming more capable, and certain use cases are continuing to\n> scale up. Over time, we tend to find ourselves approaching the end of the\n> runway at ever higher velocities. That's a problem that will get significantly\n> worse over time.\n> \n> Of course, as I think we agree, the priorities should be (in order):\n> 1. Avoid trouble\n> 2. Recover from trouble early\n> 3. Provide more and better options for recovery.\n\nWarning about trouble is another area we should focus on here.\n\n> I think 64-bit XIDs are a very good idea, but they really fit in this bottom\n> tier. Not being up against mathematical limits to the software when things\n> are going bad is certainly a good thing. But I am really worried about the\n> attitude that this patch really avoids trouble because in many cases, I don't\n> think it does, and therefore I believe we need to make sure we are not reducing\n> visibility of underlying problems.\n\nAs far as I know, all our freeze values are focused on avoiding XID\nwraparound. If XID wraparound is no longer an issue, we might find that\nour freeze limits can be much higher than they are now.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. 
They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Tue, 29 Nov 2022 09:46:05 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On 11/29/22 09:46, Bruce Momjian wrote:\n> As far as I know, all our freeze values are focused on avoiding XID\n> wraparound. If XID wraparound is no longer an issue, we might find that\n> our freeze limits can be much higher than they are now.\n> \n\nI'd be careful in that direction as the values together with maintenance \nwork mem also keep a lid on excessive index cleanup rounds.\n\n\nRegards, Jan\n\n\n", "msg_date": "Tue, 29 Nov 2022 10:19:56 -0500", "msg_from": "Jan Wieck <jan@wi3ck.info>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Mon, Nov 28, 2022 at 4:53 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Imagine if we actually had 64-bit XIDs -- let's assume for a moment\n> that it's a done deal. This raises a somewhat awkward question: do you\n> just let the system get further and further behind on freezing,\n> forever? We can all agree that 2 billion XIDs is very likely the wrong\n> time to start refusing new XIDs -- because it just isn't designed with\n> debt in mind. But what's the right time, if any? How much debt is too\n> much?\n\nI simply don't see a reason to ever stop the server entirely. I don't\neven agree with the idea of slowing down XID allocation, let alone\nrefusing it completely. When the range of allocated XIDs become too\nlarge, several bad things happen. First, we become unable to allocate\nnew XIDs without corrupting the database. Second, pg_clog and other\nSLRUs become uncomfortably large. There may be some other things too\nthat I'm not thinking about. 
But these things are not all equally bad.\nIf these were medical problems, being unable to allocate new XIDs\nwithout data corruption would be a heart attack, and SLRUs getting\nbigger on disk would be acne. You don't handle problems of such wildly\ndiffering severity in the same way. When someone is having a heart\nattack, an ambulance rushes them to the hospital, running red lights\nas necessary. When someone has acne, you don't take them to the same\nhospital in the same ambulance and drive it at a slower rate of speed.\nYou do something else entirely, and it's something that is in every\nway much less dramatic. There's no such thing as an attack of acne\nthat's so bad that it requires an ambulance ride, but even a mild\nheart attack should result in a fast trip to the ER. So here. The two\nproblems are so qualitatively different that the responses should also\nbe qualitatively different.\n\n> Admittedly this argument works a lot better with the failsafe than it\n> does with xidStopLimit. Both are removed by the patch.\n\nI don't think the failsafe stuff should be removed, but it should\nprobably be modified in some way. Running out of XIDs is the only\nvalid reason for stopping the world, at least IMO, but it is\ndefinitely NOT the only reason for vacuuming more aggressively.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 29 Nov 2022 11:21:43 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Tue, Nov 29, 2022 at 8:03 AM Chris Travers <chris@orioledata.com> wrote:\n> To be clear, I never suggested shutting down the database. What I have suggested is that repurposing the current approaching-xid-wraparound warnings to start complaining loudly when a threshold is exceeded would be helpful. 
I think it makes sense to make that threshold configurable especially if we eventually have people running bloat-free table structures.\n>\n> There are two fundamental problems here. The first is that if, as you say, a table is progressively bloating and we are getting further and further behind on vacuuming and freezing, something is seriously wrong and we need to do something about it. In these cases, in my experience, vacuuming and related tools tend to suffer degraded performance, and determining how to solve the problem takes quite a bit more time than a routine bloat issue would. So what I am arguing against is treating the problem just as a bloat issue. If you get there due to vacuum being slow, something else is wrong and you are probably going to have to find and fix that as well in order to catch up. At least that's my experience.\n>\n> I don't object to the db continuing to run, allocate xids etc. What I object to is it doing so silently where things are almost certainly going very wrong.\n\nOK. My feeling is that the set of things we can do to warn the user is\nsomewhat limited. I'm open to trying our best, but we need to have\nreasonable expectations. Sophisticated users will be monitoring for\nproblems even if we do nothing to warn, and dumb ones won't look at\ntheir logs. Any feature that proposes to warn must aim at the users who\nare smart enough to check the logs but dumb enough not to have any\nmore sophisticated monitoring. Such users certainly exist and are not\neven uncommon, but they aren't the only kind by a long shot.\n\nMy argument is that removing xidStopLimit is totally fine, because it\nonly serves to stop the database. What to do about xidWarnLimit is a\nslightly more complex question. Certainly it can't be left untouched,\nbecause warning that we're about to shut down the database for lack of\nallocatable XIDs is not sensible if there is no such lack and we\naren't going to shut it down. 
But I'm also not sure if the model is\nright. Doing nothing for a long time and then warning in every\ntransaction when some threshold is crossed is an extreme behavior\nchange. Right now that's somewhat justified because we're about to hit\na brick wall at full speed, but if we remove the brick wall and\nreplace it with a gentle pelting with rotten eggs, it's unclear that a\nsimilarly strenuous reaction is the right model. But that's also not\nto say that we should do nothing at all.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 29 Nov 2022 11:41:07 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Tue, Nov 29, 2022 at 11:41:07AM -0500, Robert Haas wrote:\n> My argument is that removing xidStopLimit is totally fine, because it\n> only serves to stop the database. What to do about xidWarnLimit is a\n> slightly more complex question. Certainly it can't be left untouched,\n> because warning that we're about to shut down the database for lack of\n> allocatable XIDs is not sensible if there is no such lack and we\n> aren't going to shut it down. But I'm also not sure if the model is\n> right. Doing nothing for a long time and then warning in every\n> transaction when some threshold is crossed is an extreme behavior\n> change. Right now that's somewhat justified because we're about to hit\n> a brick wall at full speed, but if we remove the brick wall and\n> replace it with a gentle pelting with rotten eggs, it's unclear that a\n> similarly strenuous reaction is the right model. But that's also not\n> to say that we should do nothing at all.\n\nYeah, we would probably need to warn on every 1 million transactions or\nsomething.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. 
They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Tue, 29 Nov 2022 11:57:40 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Mon, 28 Nov 2022 at 16:53, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n\n> Imagine if we actually had 64-bit XIDs -- let's assume for a moment\n> that it's a done deal. This raises a somewhat awkward question: do you\n> just let the system get further and further behind on freezing,\n> forever? We can all agree that 2 billion XIDs is very likely the wrong\n> time to start refusing new XIDs -- because it just isn't designed with\n> debt in mind. But what's the right time, if any? How much debt is too\n> much?\n\nMy first thought was... why not? Just let the system get further and\nfurther behind on freezing. Where's the harm?\n\nPicture an insert-only database that is receiving data very quickly\nnever having data deleted or modified. vacuum takes several days to\ncomplete and the system wraps 32-bit xid several times a day.\n\nThe DBA asks you why are they even bothering running vacuum? They have\nplenty of storage for clog, latency on selects is not a pain point,\nnot compared to running multi-day vacuums that impact insert times....\n\nThat isn't far off the scenario where I've seen wraparound being a\npain btw. Anti-wraparound vacuum took about 2 days and was kicking off\npretty much as soon as the previous one finished. For a table that was\nmostly read-only.\n\nOf course to make the judgement the DBA needs to have good ways to\nmeasure the space usage of clog, and the overhead caused by clog\nlookups that could be avoided. 
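The space side of that is at least easy to estimate — a sketch relying only on the fact that clog stores two transaction-status bits per XID:

```python
# clog (pg_xact) stores two transaction-status bits per XID,
# i.e. four transaction statuses per byte on disk.
def clog_bytes(xid_span: int) -> int:
    return xid_span // 4

# Status for the ~2^31 XIDs between wraparound points: about 512 MiB.
print(clog_bytes(2 ** 31) // (1024 * 1024), "MiB")
```

Keeping status for an entire 64-bit epoch's worth of history would grow linearly from there, which is exactly the trade-off the DBA would be weighing.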
Then they can judge for themselves how\nmuch freezing is appropriate.\n\n-- \ngreg\n\n\n", "msg_date": "Tue, 29 Nov 2022 18:09:19 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Tue, Nov 29, 2022 at 5:57 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Tue, Nov 29, 2022 at 11:41:07AM -0500, Robert Haas wrote:\n> > My argument is that removing xidStopLimit is totally fine, because it\n> > only serves to stop the database. What to do about xidWarnLimit is a\n> > slightly more complex question. Certainly it can't be left untouched,\n> > because warning that we're about to shut down the database for lack of\n> > allocatable XIDs is not sensible if there is no such lack and we\n> > aren't going to shut it down. But I'm also not sure if the model is\n> > right. Doing nothing for a long time and then warning in every\n> > transaction when some threshold is crossed is an extreme behavior\n> > change. Right now that's somewhat justified because we're about to hit\n> > a brick wall at full speed, but if we remove the brick wall and\n> > replace it with a gentle pelting with rotten eggs, it's unclear that a\n> > similarly strenuous reaction is the right model. But that's also not\n> > to say that we should do nothing at all.\n>\n> Yeah, we would probably need to warn on every 1 million transactions or\n> something.\n>\n>\nMy proposal would be to make the threshold configurable and start warning\non every transaction after that. There are a couple reasons to do that.\n\nThe first is that noisy warnings are extremely easy to see. You get them\nin cron emails, from psql, in the db logs etc. Having them every million\nmakes them harder to catch.\n\nThe point here is not to ensure there are no problems, but to make sure\nthat an existing layer in the current swiss cheese model of safety doesn't\ngo away. Will it stop all problems? No. 
But the current warning strategy\nis effective, given how many times we hear of cases of people having to\ntake drastic action to avoid impending xid wraparound.\n\nIf someone has an insert-only database and maybe doesn't want to ever\nfreeze, they can set the threshold to -1 or something. I would suggest\nkeeping the default at 2 billion to be in line with existing limitations\nand practices. People can then adjust as they see fit.\n\nWarning text might be something like \"XID Lag Threshold Exceeded. Is\nautovacuum clearing space and keeping up?\"\n\n\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> Embrace your flaws. They make you human, rather than perfect,\n> which you will never be.\n>\n", "msg_date": "Wed, 30 Nov 2022 03:36:44 +0100", "msg_from": "Chris Travers <chris@orioledata.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Tue, Nov 29, 2022 at 9:35 PM Chris Travers <chris@orioledata.com> wrote:\n> My proposal would be to make the threshold configurable and start warning on every transaction after that. There are a couple reasons to do that.\n>\n> The first is that noisy warnings are extremely easy to see. You get them in cron emails, from psql, in the db logs etc. Having them every million makes them harder to catch.\n>\n> The point here is not to ensure there are no problems, but to make sure that an existing layer in the current swiss cheese model of safety doesn't go away. Will it stop all problems? No. 
But the current warning strategy is effective, given how many times we hear of cases of people having to take drastic action to avoid impending xid wraparound.\n>\n> If someone has an insert-only database and maybe doesn't want to ever freeze, they can set the threshold to -1 or something. I would suggest keeping the default at 2 billion to be in line with existing limitations and practices. People can then adjust as they see fit.\n>\n> Warning text might be something like \"XID Lag Threshold Exceeded. Is autovacuum clearing space and keeping up?\"\n\nNone of this seems unreasonable to me. If we want to allow more\nconfigurability, we could also let you choose the threshold and the\nfrequency of warnings (every N transactions).\n\nBut, I think we might be getting down a little bit in the weeds. It's\nnot clear that everybody's on board with the proposed page format\nchanges. I'm not completely opposed, but I'm also not wild about the\napproach. It's probably not a good idea to spend all of our energy\ndebating the details of how to reform xidWrapLimit without having some\nconsensus on those points. It is, in a word, bikeshedding: on-disk\npage format changes are hard, but everyone understands warning\nmessages.\n\nLest we miss the forest for the trees, there is an aspect of this\npatch that I find to be an extremely good idea and think we should try\nto get committed even if the rest of the patch set ends up in the\nrubbish bin. Specifically, there are a couple of patches in here that\nhave to do with making SLRUs indexed by 64-bit integers rather than by\n32-bit integers. We've had repeated bugs in the area of handling SLRU\nwraparound in the past, some of which have caused data loss. Just by\nchance, I ran across a situation just yesterday where an SLRU wrapped\naround on disk for reasons that I don't really understand yet and\nchaos ensued. 
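A simplified sketch of why this keeps biting — assuming clog's usual geometry of 8 kB pages with four transaction statuses per byte, a 32-bit XID counter makes the SLRU page number itself wrap, so page-recycling logic must reason modulo 2^32, while a 64-bit index makes plain integer comparison safe:

```python
XACTS_PER_PAGE = 8192 * 4  # 32768 transaction statuses per SLRU page

def slru_page_32(xid32: int) -> int:
    # 32-bit world: the XID counter wraps, so the page number wraps too
    return (xid32 % 2 ** 32) // XACTS_PER_PAGE

def slru_page_64(xid64: int) -> int:
    # 64-bit world: page numbers are monotonic; ordinary < and > are safe
    return xid64 // XACTS_PER_PAGE

# After one full trip around the 32-bit space, page numbers collide...
assert slru_page_32(3) == slru_page_32(2 ** 32 + 3)
# ...while 64-bit indexing keeps them distinct and ordered.
assert slru_page_64(3) < slru_page_64(2 ** 32 + 3)
```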
Switching to an indexing system for SLRUs that does not\never wrap around would probably enable us to get rid of a whole bunch\nof crufty code, and would also likely improve the general reliability\nof the system in situations where wraparound is threatened. It seems\nlike a really, really good idea.\n\nI haven't checked the patches to see whether they look correct, and\nI'm concerned in particular about upgrade scenarios. But if there's a\nway we can get that part committed, I think it would be a clear win.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 30 Nov 2022 11:13:08 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Wed, Nov 30, 2022 at 8:13 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I haven't checked the patches to see whether they look correct, and\n> I'm concerned in particular about upgrade scenarios. But if there's a\n> way we can get that part committed, I think it would be a clear win.\n\n+1\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 30 Nov 2022 08:35:15 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi, Robert!\n> Lest we miss the forest for the trees, there is an aspect of this\n> patch that I find to be an extremely good idea and think we should try\n> to get committed even if the rest of the patch set ends up in the\n> rubbish bin. Specifically, there are a couple of patches in here that\n> have to do with making SLRUs indexed by 64-bit integers rather than by\n> 32-bit integers. We've had repeated bugs in the area of handling SLRU\n> wraparound in the past, some of which have caused data loss. Just by\n> chance, I ran across a situation just yesterday where an SLRU wrapped\n> around on disk for reasons that I don't really understand yet and\n> chaos ensued. 
Switching to an indexing system for SLRUs that does not\n> ever wrap around would probably enable us to get rid of a whole bunch\n> of crufty code, and would also likely improve the general reliability\n> of the system in situations where wraparound is threatened. It seems\n> like a really, really good idea.\n\nI totally support the idea that the part related to SLRU is worth\ncommitting, whether as the first step towards 64-bit XIDs or separately.\nThis subset is discussed in a separate thread [1]. It seems that we\nneed more time to reach a consensus on the implementation of the whole\nbig thing. This discussion alone is complicated and reveals\nmany different aspects concurrently in one thread.\n\nSo I'd vote for an evolutionary approach and give my +1 for\nundertaking efforts to first commit [1] into 16.\n\n[1]: https://www.postgresql.org/message-id/CAFiTN-uudj2PY8GsUzFtLYFpBoq_rKegW3On_8ZHdxB1mVv3-A%40mail.gmail.com\n\nKind regards,\nPavel Borisov,\nSupabase.", "msg_date": "Fri, 9 Dec 2022 17:46:12 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "> So I'd vote for an evolutionary approach and give my +1 for\n> undertaking efforts to first commit [1] into 16.\n>\n> [1]:\n> https://www.postgresql.org/message-id/CAFiTN-uudj2PY8GsUzFtLYFpBoq_rKegW3On_8ZHdxB1mVv3-A%40mail.gmail.com\n>\n> Kind regards,\n> Pavel Borisov,\n> Supabase.\n>\n\n+1 Totally support the idea. Let's focus on committing SLRU changes.\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Fri, 9 Dec 2022 17:12:45 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi!\n\nI want to make a quick summary here.\n\n1. An overall consensus has been reached: we shall focus on committing SLRU\nchanges first.\n2. I've created an appropriate patch set here [0].\n3. Now [0] is waiting for a review. As always, all opinions will be welcome.\n4. While discussing error/warning messages and some other stuff, this\nthread was marked as \"Waiting on Author\".\n5. I rebase this patch set once a week, but do not post it here,\nsince there is no need for it. See (1).\n6. For now, I don't understand what changes I have to make here. So, is\n\"Waiting\non Author\" an appropriate status here?\n\nAnyway. Let's discuss the on-disk page format, shall we?\n\nAFAICS, we have the following options:\n1. Making \"true\" 64–bit XIDs. I.e. making every tuple have 64–bit xmin and\nxmax fields.\n2. Put a special area in every page where the base for XIDs is stored. This is what\nwe have done in the current patch set.\n3. Put the base for XIDs in a fork.\n4. Make explicit 64–bit XIDs for concrete relations. I.e. CREATE TABLE foo\nWITH (xid8) of smth.\n\nThere were opinions that the proposed solution (2) is not optimal. It\nwould be great to hear your concerns and thoughts.\n\n[0]\nhttps://www.postgresql.org/message-id/CACG%3Dezav34TL%2BfGXD5vJ48%3DQbQBL9BiwkOTWduu9yRqie-h%2BDg%40mail.gmail.com\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Wed, 28 Dec 2022 13:14:16 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi Maxim Orlov:\r\n\r\n\r\n>AFAICS, we have a following options:\r\n>1. Making \"true\" 64–bit XIDs. I.e. making every tuple have 64–bit xmin and >xmax fields.\r\n>2. Put special in every page where base for XIDs are stored. This is what we >have done in the current patch set.\r\n>3. Put base for XIDs in a fork.\r\n>4. Make explicit 64–bit XIDs for concrete relations. I.e. 
CREATE TABLE foo >WITH (xid8) of smth.\r\n\r\nI think the first solution will not be agreed by the core committers, they will consider that the change is too big and will affect the stability of PostgreSQL,I think the second solution is actually quite good, and you've been working on it now,and there are successful cases (opengauss is implemented in this way,In order to save space and be compatible with older versions, opengauss design is to store the xmin/xmax of the head of the tuple in two parts, the xmin/xmax of the head of the tuple is the number of uint32; the header of the page stores the 64-bit xid_base, which is the xid_base of the current page.),I think it's best to stick to this solution now.\r\nOpengauss tuple structure:\r\n[cid:3fae289c-7f88-46be-a775-2d93b1a9c41e]\r\nBest wish\r\n\r\n\r\n\r\n\r\n________________________________\r\n发件人: Maxim Orlov <orlovmg@gmail.com>\r\n发送时间: 2022年12月28日 18:14\r\n收件人: Pavel Borisov <pashkin.elfe@gmail.com>\r\n抄送: Robert Haas <robertmhaas@gmail.com>; Chris Travers <chris@orioledata.com>; Bruce Momjian <bruce@momjian.us>; Aleksander Alekseev <aleksander@timescale.com>; pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>; Chris Travers <chris.travers@gmail.com>; Peter Geoghegan <pg@bowt.ie>; Fedor Sigaev <teodor@sigaev.ru>; Alexander Korotkov <aekorotkov@gmail.com>; Konstantin Knizhnik <knizhnik@garret.ru>; Nikita Glukhov <n.gluhov@postgrespro.ru>; Yura Sokolov <y.sokolov@postgrespro.ru>; Simon Riggs <simon.riggs@enterprisedb.com>\r\n主题: Re: Add 64-bit XIDs into PostgreSQL 15\r\n\r\nHi!\r\n\r\nI want to make a quick summary here.\r\n\r\n1. An overall consensus has been reached: we shall focus on committing SLRU changes first.\r\n2. I've created an appropriate patch set here [0].\r\n3. How [0] is waiting for a review. As always, all opinions will be welcome.\r\n4. While discussing error/warning messages and some other stuff, this thread was marked as \"Waiting on Author\".\r\n5. 
I do rebase this patch set once in a week, but do not post it here, since there is no need in it. See (1).\r\n6. For now, I don't understand what changes I have to make here. So, does \"Waiting on Author\" is appropriate status here?\r\n\r\nAnyway. Let's discuss on-disk page format, shall we?\r\n\r\nAFAICS, we have a following options:\r\n1. Making \"true\" 64–bit XIDs. I.e. making every tuple have 64–bit xmin and xmax fields.\r\n2. Put special in every page where base for XIDs are stored. This is what we have done in the current patch set.\r\n3. Put base for XIDs in a fork.\r\n4. Make explicit 64–bit XIDs for concrete relations. I.e. CREATE TABLE foo WITH (xid8) of smth.\r\n\r\nThere were opinions that the proposed solution (2) is not the optimal. It would be great to hear your concerns and thoughts.\r\n\r\n[0] https://www.postgresql.org/message-id/CACG%3Dezav34TL%2BfGXD5vJ48%3DQbQBL9BiwkOTWduu9yRqie-h%2BDg%40mail.gmail.com\r\n\r\n--\r\nBest regards,\r\nMaxim Orlov.\r\n", "msg_date": "Fri, 30 Dec 2022 08:27:57 +0000", "msg_from": "adherent postgres <adherent_postgres@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi Maxim,\n\n> Anyway. Let's discuss on-disk page format, shall we?\n\nHere are my two cents.\n\n> AFAICS, we have a following options:\n> [...]\n> 2. Put special in every page where base for XIDs are stored. This is what we have done in the current patch set.\n\nThe approach of using special space IMO is fine. I'm still a bit\nsceptical about the need to introduce a new entity \"64-bit base XID\"\nwhile we already have 32-bit XID epochs that will do the job. I\nsuspect that having fewer entities helps to reason about the code and\nthat it is important in the long run, but maybe it's just me. In any\ncase, I don't have a strong opinion here.\n\nAdditionally, I think we should be clear about the long term goals. As\nPeter G. 
pointed out above:\n\n> I think that we'll be able to get rid of freezing in a few years time.\n\nIMO eventually getting rid of freezing and \"magic\" XIDs will simplify\nthe maintenance of the project and also make the behaviour of the\nsystem much more predictable. The user will have to worry only about\nthe disk space reclamation.\n\nIf we have a consensus that this is the final goal then we should\ndefinitely be moving toward 64-bit XIDs and perhaps even include a\ncorresponding PoC to the patchset. If we want to keep freezing\nindefinitely then, as Chris Travers argued, 64-bit XIDs don't bring\nthat much value and maybe the community should be focusing on\nsomething else.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 10 Jan 2023 20:42:31 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "This patch hasn't applied in quite some time, and the thread has moved to\ndiscussing higher lever items rather than the suggested patch, so I'm closing\nthis as Returned with Feedback. Please feel free to resubmit when there is\nrenewed interest and a concensus on how/what to proceed with.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 4 Jul 2023 09:40:34 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi,\n\n> This patch hasn't applied in quite some time, and the thread has moved to\n> discussing higher lever items rather than the suggested patch, so I'm closing\n> this as Returned with Feedback. Please feel free to resubmit when there is\n> renewed interest and a concensus on how/what to proceed with.\n\nYes, this thread awaits several other patches to be merged [1] in\norder to continue, so it makes sense to mark it as RwF for the time\nbeing. 
Thanks!\n\n[1]: https://commitfest.postgresql.org/43/3489/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 4 Jul 2023 13:02:37 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi!\n\nJust to keep this thread up to date, here's a new version after recent\nchanges in SLRU.\nI'm also change order of the patches in the set, to make adding initdb MOX\noptions after the\n\"core 64 xid\" patch, since MOX patch is unlikely to be committed and now\nfor test purpose only.\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Wed, 13 Dec 2023 15:25:30 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi Maxim Orlov\n    Good news, xid64 has achieved a successful first phase. I tried to change\nthe patch status (https://commitfest.postgresql.org/43/3594/), but it seems\nincorrect\n\nMaxim Orlov <orlovmg@gmail.com> wrote on Wed, Dec 13, 2023 at 20:26:\n\n> Hi!\n>\n> Just to keep this thread up to date, here's a new version after recent\n> changes in SLRU.\n> I'm also change order of the patches in the set, to make adding initdb MOX\n> options after the\n> \"core 64 xid\" patch, since MOX patch is unlikely to be committed and now\n> for test purpose only.\n>\n> --\n> Best regards,\n> Maxim Orlov.\n>\n", "msg_date": "Fri, 15 Dec 
2023 09:51:38 +0800", "msg_from": "wenhui qiu <qiuwenhuifx@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi, Wenhui!\n\nOn Fri, 15 Dec 2023 at 05:52, wenhui qiu <qiuwenhuifx@gmail.com> wrote:\n\n> Hi Maxim Orlov\n>     Good news,xid64 has achieved a successful first phase,I tried to\n> change the path status (https://commitfest.postgresql.org/43/3594/) ,But\n> it seems incorrect\n>\n> Maxim Orlov <orlovmg@gmail.com> wrote on Wed, Dec 13, 2023 at 20:26:\n>\n>> Hi!\n>>\n>> Just to keep this thread up to date, here's a new version after recent\n>> changes in SLRU.\n>> I'm also change order of the patches in the set, to make adding initdb\n>> MOX options after the\n>> \"core 64 xid\" patch, since MOX patch is unlikely to be committed and now\n>> for test purpose only.\n>>\n>\nIf the patch is RwF the CF entry is finished and can't be enabled, rather\nthe patch needs to be submitted in a new entry, which I have just done.\nhttps://commitfest.postgresql.org/46/4703/\n\nPlease feel free to submit your review.\n\nKind regards,\nPavel Borisov,\nSupabase\n", "msg_date": "Fri, 15 Dec 
13:13:33 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi Pavel Borisov\n     Many thanks\n\nBest wish\n\nPavel Borisov <pashkin.elfe@gmail.com> wrote on Fri, Dec 15, 2023 at 17:13:\n\n> Hi, Wenhui!\n>\n> On Fri, 15 Dec 2023 at 05:52, wenhui qiu <qiuwenhuifx@gmail.com> wrote:\n>\n>> Hi Maxim Orlov\n>> Good news,xid64 has achieved a successful first phase,I tried to\n>> change the path status (https://commitfest.postgresql.org/43/3594/) ,But\n>> it seems incorrect\n>>\n>> Maxim Orlov <orlovmg@gmail.com> wrote on Wed, Dec 13, 2023 at 20:26:\n>>\n>>> Hi!\n>>>\n>>> Just to keep this thread up to date, here's a new version after recent\n>>> changes in SLRU.\n>>> I'm also change order of the patches in the set, to make adding initdb\n>>> MOX options after the\n>>> \"core 64 xid\" patch, since MOX patch is unlikely to be committed and now\n>>> for test purpose only.\n>>>\n>>\n> If the patch is RwF the CF entry is finished and can't be enabled, rather\n> the patch needs to be submitted in a new entry, which I have just done.\n> https://commitfest.postgresql.org/46/4703/\n>\n> Please feel free to submit your review.\n>\n> Kind regards,\n> Pavel Borisov,\n> Supabase\n>\n", "msg_date": "Fri, 15 Dec 2023 17:24:21 +0800", "msg_from": "wenhui qiu <qiuwenhuifx@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Wed, Dec 13, 2023 at 5:56 PM Maxim Orlov <orlovmg@gmail.com> wrote:\n>\n> Hi!\n>\n> Just to keep this thread up to date, here's a new version after recent changes in SLRU.\n> I'm also change order of the patches in the set, to make adding initdb MOX options after the\n> \"core 64 xid\" patch, since MOX patch is unlikely to be committed and now for test purpose only.\n\nI tried to apply the patch but it is failing at the Head. It is giving\nthe following error:\nHunk #1 succeeded at 601 (offset 5 lines).\npatching file src/backend/replication/slot.c\npatching file src/backend/replication/walreceiver.c\npatching file src/backend/replication/walsender.c\nHunk #1 succeeded at 2434 (offset 160 lines).\npatching file src/backend/storage/ipc/procarray.c\nHunk #1 succeeded at 1115 with fuzz 2.\nHunk #3 succeeded at 1286 with fuzz 2.\nHunk #7 FAILED at 4341.\nHunk #8 FAILED at 4899.\nHunk #9 FAILED at 4959.\n3 out of 10 hunks FAILED -- saving rejects to file\nsrc/backend/storage/ipc/procarray.c.rej\npatching file src/backend/storage/ipc/standby.c\nHunk #1 FAILED at 1043.\nHunk #2 FAILED at 1370.\n2 out of 2 hunks FAILED -- saving rejects to file\nsrc/backend/storage/ipc/standby.c.rej\nPlease send the Re-base version of the patch.\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Fri, 19 Jan 2024 11:26:53 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi,\n\n> Please send the Re-base version of the patch.\n\nPFA the rebased 
patchset.\n\nIn order to keep the scope reasonable I suggest we focus on\n0001...0005 for now. 0006+ are difficult to rebase / review and I'm a\nbit worried for the committer who will merge them. We can return to\n0006+ when we deal with the first 5 patches. These patches can be\ndelivered to PG18 independently, as we did with SLRU in the PG17\ncycle.\n\nTested on Intel MacOS w/ Autotools and ARM Linux w/ Meson.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Wed, 19 Jun 2024 13:36:51 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi,\n\n> > Please send the Re-base version of the patch.\n>\n> PFA the rebased patchset.\n>\n> In order to keep the scope reasonable I suggest we focus on\n> 0001...0005 for now. 0006+ are difficult to rebase / review and I'm a\n> bit worried for the committer who will merge them. We can return to\n> 0006+ when we deal with the first 5 patches. 
These patches can be\n> delivered to PG18 independently, as we did with SLRU in the PG17\n> cycle.\n>\n> Tested on Intel MacOS w/ Autotools and ARM Linux w/ Meson.\n\ncfbot revealed a bug in 0004:\n\n```\n../src/backend/access/common/reloptions.c:1842:26: runtime error:\nstore to misaligned address 0x55a5c6d64d94 for type 'int64', which\nrequires 8 byte alignment\n0x55a5c6d64d94: note: pointer points here\n ff ff ff ff 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n00 00 00 00 00 00 ff ff ff ff\n ^\n==18945==Using libbacktrace symbolizer.\n #0 0x55a5c4a9b273 in fillRelOptions\n../src/backend/access/common/reloptions.c:1842\n #1 0x55a5c4a9c709 in build_reloptions\n../src/backend/access/common/reloptions.c:2002\n #2 0x55a5c4a9c739 in default_reloptions\n../src/backend/access/common/reloptions.c:1957\n #3 0x55a5c4a9ccc0 in heap_reloptions\n../src/backend/access/common/reloptions.c:2109\n #4 0x55a5c4da98d6 in DefineRelation ../src/backend/commands/tablecmds.c:858\n #5 0x55a5c52549ac in ProcessUtilitySlow ../src/backend/tcop/utility.c:1164\n #6 0x55a5c52545a2 in standard_ProcessUtility\n../src/backend/tcop/utility.c:1067\n #7 0x55a5c52546fd in ProcessUtility ../src/backend/tcop/utility.c:523\n #8 0x55a5c524fe5b in PortalRunUtility ../src/backend/tcop/pquery.c:1158\n #9 0x55a5c5250531 in PortalRunMulti ../src/backend/tcop/pquery.c:1315\n #10 0x55a5c5250bd6 in PortalRun ../src/backend/tcop/pquery.c:791\n #11 0x55a5c5249e44 in exec_simple_query ../src/backend/tcop/postgres.c:1274\n #12 0x55a5c524cc07 in PostgresMain ../src/backend/tcop/postgres.c:4680\n #13 0x55a5c524d18c in PostgresSingleUserMain\n../src/backend/tcop/postgres.c:4136\n #14 0x55a5c4f155e2 in main ../src/backend/main/main.c:194\n #15 0x7fc3a77e5d09 in __libc_start_main\n(/lib/x86_64-linux-gnu/libc.so.6+0x23d09)\n #16 0x55a5c4a6a249 in _start\n(/tmp/cirrus-ci-build/build/tmp_install/usr/local/pgsql/bin/postgres+0x8e1249)\n\nAborted (core dumped)\n```\n\nHere is the fix. 
It can be tested like this:\n\n```\n--- a/src/backend/access/common/reloptions.c\n+++ b/src/backend/access/common/reloptions.c\n@@ -1839,6 +1839,7 @@ fillRelOptions(void *rdopts, Size basesize,\n ((relopt_int *) options[i].gen)->default_val;\n break;\n case RELOPT_TYPE_INT64:\n+ Assert((((uint64)itempos) & 0x7) == 0);\n *(int64 *) itempos = options[i].isset ?\n options[i].values.int64_val :\n ((relopt_int64 *) options[i].gen)->default_val;\n```\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Wed, 19 Jun 2024 16:22:21 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Hi,\n\n> Here is the fix. It can be tested like this:\n> [...]\n\nPFA the rebased patchset.\n\n\n--\nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 23 Jul 2024 12:13:51 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On 23.07.24 11:13, Aleksander Alekseev wrote:\n>> Here is the fix. It can be tested like this:\n>> [...]\n> \n> PFA the rebased patchset.\n\nI'm wondering about the 64-bit GUCs.\n\nAt first, it makes sense that if there are settings that are counted in \nterms of transactions, and transaction numbers are 64-bit integers, then \nthose settings should accept 64-bit integers.\n\nBut what is the purpose and effect of setting those parameters to such \nhuge numbers? For example, what is the usability of being able to set\n\nvacuum_failsafe_age = 500000000000\n\nI think in the world of 32-bit transaction IDs, you can intuitively \ninterpret most of these \"transaction age\" settings as \"percent toward \ndisaster\". 
For example,\n\nvacuum_freeze_table_age = 150000000\n\nis 7% toward disaster, and\n\nvacuum_failsafe_age = 1600000000\n\nis 75% toward disaster.\n\nHowever, if there is no more disaster threshold at 2^31, what is the \nguidance for setting these?  Or more radically, why even run \ntransaction-count-based vacuum at all?\n\nConversely, if there is still some threshold (not disaster, but \nefficiency or something else), would it still be useful to keep these \nsettings well below 2^31?  In which case, we might not need 64-bit GUCs.\n\nYour 0004 patch adds support for 64-bit GUCs but doesn't actually \nconvert any existing GUCs to use that.  (Unlike the reloptions, which \nyour patch converts.)  And so there is no documentation about these \nquestions.\n\n\n\n", "msg_date": "Thu, 25 Jul 2024 12:19:39 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On 25/07/2024 13:19, Peter Eisentraut wrote:\n> I'm wondering about the 64-bit GUCs.\n> \n> At first, it makes sense that if there are settings that are counted in \n> terms of transactions, and transaction numbers are 64-bit integers, then \n> those settings should accept 64-bit integers.\n> \n> But what is the purpose and effect of setting those parameters to such \n> huge numbers?  For example, what is the usability of being able to set\n> \n> vacuum_failsafe_age = 500000000000\n> \n> I think in the world of 32-bit transaction IDs, you can intuitively \n> interpret most of these \"transaction age\" settings as \"percent toward \n> disaster\". 
Or more radically, why even run \n> transaction-count-based vacuum at all?\n\nTo allow the CLOG to be truncated. There's no disaster anymore, but \nwithout freezing, the clog will grow indefinitely.\n\n> Conversely, if there is still some threshold (not disaster, but \n> efficiency or something else), would it still be useful to keep these \n> settings well below 2^31?  In which case, we might not need 64-bit GUCs.\n\nYeah, I don't think it's critical. It makes sense to switch to 64 bit \nGUCs, so that you can make those settings higher, but it's not critical \nor strictly required for the rest of the work.\n\nAnother approach is to make the GUCs to mean \"thousands of XIDs\", \nsimilar to how many of our memory settings are in kB rather than bytes. \nthat might be a super confusing change for existing settings though.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 25 Jul 2024 14:09:10 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Thu, Jul 25, 2024 at 5:19 PM Peter Eisentraut <peter@eisentraut.org>\nwrote:\n\n> On 23.07.24 11:13, Aleksander Alekseev wrote:\n> >> Here is the fix. It can be tested like this:\n> >> [...]\n> >\n> > PFA the rebased patchset.\n>\n> I'm wondering about the 64-bit GUCs.\n>\n> At first, it makes sense that if there are settings that are counted in\n> terms of transactions, and transaction numbers are 64-bit integers, then\n> those settings should accept 64-bit integers.\n>\n> But what is the purpose and effect of setting those parameters to such\n> huge numbers? 
For example, what is the usability of being able to set\n>\n> vacuum_failsafe_age = 500000000000\n>\n\nAlso in the rebased patch set I cannot find the above, so I cannot evaluate\nwhat it does.\n\nIn the past I have pushed for some mechanism to produce warnings like we\ncurrently have approaching xid wraparound when a certain threshold is met.\nIs this that mechanism?\n\n>\n> I think in the world of 32-bit transaction IDs, you can intuitively\n> interpret most of these \"transaction age\" settings as \"percent toward\n> disaster\". For example,\n>\n> vacuum_freeze_table_age = 150000000\n>\n> is 7% toward disaster, and\n>\n> vacuum_failsafe_age = 1600000000\n>\n> is 75% toward disaster.\n>\n> However, if there is no more disaster threshold at 2^31, what is the\n> guidance for setting these? Or more radically, why even run\n> transaction-count-based vacuum at all?\n>\n> Conversely, if there is still some threshold (not disaster, but\n> efficiency or something else), would it still be useful to keep these\n> settings well below 2^31? In which case, we might not need 64-bit GUCs.\n>\n> Your 0004 patch adds support for 64-bit GUCs but doesn't actually\n> convert any existing GUCs to use that. (Unlike the reloptions, which\n> your patch coverts.) And so there is no documentation about these\n> questions.\n>\n>\n\n-- \nBest Wishes,\nChris Travers\n\nEfficito: Hosted Accounting and ERP. Robust and Flexible. No vendor\nlock-in.\nhttp://www.efficito.com/learn_more\n", "msg_date": "Thu, 25 Jul 2024 19:18:47 +0700", "msg_from": "Chris Travers <chris.travers@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On 25.07.24 13:09, Heikki Linnakangas wrote:\n>> However, if there is no more disaster threshold at 2^31, what is the \n>> guidance for setting these?  Or more radically, why even run \n>> transaction-count-based vacuum at all?\n> \n> To allow the CLOG to be truncated. There's no disaster anymore, but \n> without freezing, the clog will grow indefinitely.\n\nMaybe a setting similar to max_wal_size could be better for that?\n\n\n\n", "msg_date": "Thu, 25 Jul 2024 15:31:05 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Maybe a setting similar to max_wal_size could be better for that?\n+1\n\nThanks\n\nPeter Eisentraut <peter@eisentraut.org> wrote on Thu, Jul 25, 2024 at 21:31:\n\n> On 25.07.24 13:09, Heikki Linnakangas wrote:\n> >> However, if there is no more disaster threshold at 2^31, what is the\n> >> guidance for setting these? Or more radically, why even run\n> >> transaction-count-based vacuum at all?\n> >\n> > To allow the CLOG to be truncated. There's no disaster anymore, but\n> > without freezing, the clog will grow indefinitely.\n>\n> Maybe a setting similar to max_wal_size could be better for that?\n>\n>\n>\n>\n", "msg_date": "Thu, 25 Jul 2024 21:54:08 +0800", "msg_from": "wenhui qiu <qiuwenhuifx@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "On Thu, Jul 25, 2024 at 02:09:10PM +0300, Heikki Linnakangas wrote:\n> On 25/07/2024 13:19, Peter Eisentraut wrote:\n>> Conversely, if there is still some threshold (not disaster, but\n>> efficiency or something else), would it still be useful to keep these\n>> settings well below 2^31?  In which case, we might not need 64-bit GUCs.\n> \n> Yeah, I don't think it's critical. It makes sense to switch to 64 bit GUCs,\n> so that you can make those settings higher, but it's not critical or\n> strictly required for the rest of the work.\n\nIt looks like we have some sort of consensus to introduce these GUC\nAPIs, then?  I'd suggest to split that into its own thread because it\ncan be treated as an independent subject.  (I know that this was\ndiscussed previously, but it's been some time and this is going to\nrequire a rebased version anyway, so..)\n\n> Another approach is to make the GUCs to mean \"thousands of XIDs\", similar to\n> how many of our memory settings are in kB rather than bytes. that might be a\n> super confusing change for existing settings though.\n\nI find that a bit confusing as well, still that would be OK in terms\nof compatibility as long as you enforce the presence of a unit and\nleave the default behavior alone.  So it does not sound that bad to\nme, either.  It is also a bit simpler to set for users.  
Less zeros to\ndeal with.\n\nAleksander Alekseev has proposed to remove short file names from SLRUs\nto simplify its internals, as of this thread, so that's one less topic\nto deal with here:\nhttps://www.postgresql.org/message-id/CAJ7c6TOy7fUW9MuNeOWor3cSFnQg9tgz%3DmjXHDb94GORtM_Eyg%40mail.gmail.com\n\nI am unclear about the rest of the thread.  Considering v55-0004 and\nv55-0003 as two different topics, what should we do with the other\nthings?  It does not seem to me that we have a clear consensus on if\nwe are going to do something about some epoch:xid -> 64b XID switch,\nand what are the arguments in play here.  The thread has been long, so\nI may be just missing the details but a summary would be nice.\n--\nMichael", "msg_date": "Thu, 12 Sep 2024 09:12:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Apparently, the original thread will inevitably disintegrate into many\nseparate ones.\nFor me, looks like some kind of road map. One for 64-bit GUCs, another one\nto remove\nshort file names from SLRUs and, to make things more complicated, [1] for\ncf entry [0],\nto get rid of MultiXactOffset wraparound by switching to 64 bits.\n\n[0] https://commitfest.postgresql.org/49/5205/\n[1]\nhttps://www.postgresql.org/message-id/flat/CACG%3DezaWg7_nt-8ey4aKv2w9LcuLthHknwCawmBgEeTnJrJTcw%40mail.gmail.com\n\n-- \nBest regards,\nMaxim Orlov.\n", "msg_date": "Thu, 12 Sep 2024 11:49:37 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" }, { "msg_contents": "Michael, Maxim,\n\n> Apparently, the original thread will inevitably disintegrate into many separate ones.\n> For me, looks like some kind of road map. One for 64-bit GUCs, another one to remove\n> short file names from SLRUs and, to make things more complicated, [1] for cf entry [0],\n> to get rid of MultiXactOffset wraparound by switching to 64 bits.\n>\n> [0] https://commitfest.postgresql.org/49/5205/\n> [1] https://www.postgresql.org/message-id/flat/CACG%3DezaWg7_nt-8ey4aKv2w9LcuLthHknwCawmBgEeTnJrJTcw%40mail.gmail.com\n\nAgree.\n\nTo me it seems like we didn't reach a consensus regarding switching to\n64-bit XIDs. 
Given that and the fact that the patchset is rather\n> difficult to rebase (and review) I suggest focusing on something we\n> reached a consensus for. I'm going to close a CF entry for this\n> particular thread as RwF unless anyone objects. We can always return\n> to this later, preferably knowing that there is a particular committer\n> who has time and energy for merging this.\n\nI started a new thread and opened a new CF entry for Int64 GUCs:\n\nhttps://commitfest.postgresql.org/50/5253/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 12 Sep 2024 14:12:41 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15" } ]
[ { "msg_contents": "Hi!\n\nWe are working on custom toaster for JSONB [1], because current TOAST is \nuniversal for any data type and because of that it has some disadvantages:\n   - \"one toast fits all\"  may be not the best solution for particular\n     type or/and use cases\n   - it doesn't know the internal structure of data type, so it  cannot\n choose an optimal toast strategy\n   - it can't  share common parts between different rows and even\n     versions of rows\n\nModification of current toaster for all tasks and cases looks too \ncomplex, moreover, it  will not work for  custom data types. Postgres \nis an extensible database,  why not to extend its extensibility even \nfurther, to have pluggable TOAST! We  propose an idea to separate \ntoaster from  heap using  toaster API similar to table AM API etc. \nFollowing patches are applicable over patch in [1]\n\n1) 1_toaster_interface_v1.patch.gz\nhttps://github.com/postgrespro/postgres/tree/toaster_interface\n  Introduces  syntax for storage and formal toaster API. Adds column \natttoaster to pg_attribute, by design this column should not be equal to \ninvalid oid for any toastable datatype, ie it must have correct oid for \nany type (not column) with non-plain storage. Since  toaster may support \nonly particular datatype, core should check correctness of toaster set \nby toaster validate method. New commands could be found in \nsrc/test/regress/sql/toaster.sql\n\nOn-disk toast pointer structure now has one more possible struct - \nvaratt_custom with fixed header and variable tail which uses as a \nstorage for custom toasters. Format of built-in toaster is kept to allow \nsimple pg_upgrade logic.\n\nSince toaster for column could be changed during table's lifetime we had \ntwo options about toaster's drop operation:\n   - if column's toaster has been changed,  then we need to re-toast all\n     values, which could be extremely expensive. 
In any case,\n     functions/operators should be ready to work with values toasted by\n     different toasters, although any toaster should execute the simple\n     toast/detoast operations, which allows any existing code to\n     work with the new approach. Tracking dependencies between toasters and\n     rows looks like a bad idea.\n   - disallow dropping a toaster. We don't believe that there will be many\n     toasters at the same time (the number of AMs isn't very high either, and\n     we don't believe that this will change significantly in the near\n     future), so prohibiting the dropping of a toaster looks reasonable.\nIn this patch set we chose the second option.\n\nThe toaster API includes a get_vtable method, which is planned for accessing \ncustom toaster features which aren't covered by this API. The idea is \nthat the toaster returns some structure with some values and/or pointers to \nthe toaster's methods, and the caller could use it for particular purposes; see \npatch 4). The kind of structure is identified by a magic number, which should be \nthe first field of this structure.\n\nAlso added contrib/dummy_toaster to simplify checking.\n\npsql/pg_dump are modified to support the toaster object concept.\n\n2) 2_toaster_default_v1.patch.gz\nhttps://github.com/postgrespro/postgres/tree/toaster_default\nThe built-in toaster is implemented (with some refactoring) using the toaster API \nas a generic (or default) toaster. 
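To make the shape of such a routine table concrete, here is a rough, self-contained C sketch. All names here (TsrRoutine, validate, toast, detoast, TSR_MAGIC) are only an illustration of the idea, not the actual code from the patches, and a plain char buffer stands in for varlena values:\n\n```c\n#include <assert.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n/* Illustrative stand-in for PostgreSQL's Oid; not the real typedef. */\ntypedef unsigned int Oid;\n\n/*\n * A toaster is a table of callbacks looked up by the core via the oid\n * stored in pg_attribute.atttoaster. The magic field plays the same\n * role as in the get_vtable() idea: callers identify the kind of\n * structure by its first field.\n */\ntypedef struct TsrRoutine\n{\n    int     magic;                                /* structure identifier */\n    int   (*validate) (Oid typeoid);              /* can we toast this type? */\n    char *(*toast) (const char *val, size_t len); /* store, return \"pointer\" */\n    char *(*detoast) (const char *ptr);           /* reconstruct the value */\n} TsrRoutine;\n\n#define TSR_MAGIC 0x54535201\n\n/*\n * A toaster in the spirit of contrib/dummy_toaster: it keeps the value\n * \"in the pointer\" (here: a plain copy) and fails for values over 1 kB.\n */\nstatic int\ndummy_validate(Oid typeoid)\n{\n    (void) typeoid;           /* this sketch accepts any type */\n    return 1;\n}\n\nstatic char *\ndummy_toast(const char *val, size_t len)\n{\n    char   *copy;\n\n    if (len > 1024)\n        return NULL;          /* \"fails if value is greater than 1kb\" */\n    copy = malloc(len + 1);\n    memcpy(copy, val, len);\n    copy[len] = '\0';\n    return copy;\n}\n\nstatic char *\ndummy_detoast(const char *ptr)\n{\n    char   *copy = malloc(strlen(ptr) + 1);\n\n    strcpy(copy, ptr);\n    return copy;\n}\n\nstatic const TsrRoutine dummy_toaster = {\n    TSR_MAGIC, dummy_validate, dummy_toast, dummy_detoast\n};\n\nint\nmain(void)\n{\n    assert(dummy_toaster.magic == TSR_MAGIC);\n    assert(dummy_toaster.validate(25));\n\n    char *toasted = dummy_toaster.toast(\"hello\", 5);\n    assert(toasted != NULL);\n\n    char *plain = dummy_toaster.detoast(toasted);\n    assert(strcmp(plain, \"hello\") == 0);\n\n    assert(dummy_toaster.toast(\"x\", 2000) == NULL); /* over the limit */\n\n    free(toasted);\n    free(plain);\n    printf(\"ok\\n\");\n    return 0;\n}\n```\n\nA real toaster of course deals with Datum/varlena values and writes chunks to a toast \nrelation; the point here is only the callback-table shape and the magic-number \nidentification. 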
dummy_toaster here is a minimal \nworkable example: it saves the value directly in the toast pointer and fails if \nthe value is greater than 1 kB.\n\n3) 3_toaster_snapshot_v1.patch.gz\nhttps://github.com/postgrespro/postgres/tree/toaster_snapshot\nThe patch implements a technique to distinguish row versions in toasted \nvalues, to share common parts of toasted values between different \nversions of rows.\n\n4) 4_bytea_appendable_toaster_v1.patch.gz\nhttps://github.com/postgrespro/postgres/tree/bytea_appendable_toaster\nA contrib module implementing a toaster for non-compressed bytea columns, \nwhich allows fast appending to an existing bytea value. The appended tail is \nstored directly in the toast pointer, if there is enough room for it.\n\nNote: the patch modifies byteacat() to support the contrib toaster. This seems \nugly, and the contrib module should create a new concatenation function.\n\nWe are open to any questions, discussions, objections and advice.\nThank you.\n\nPeople behind:\nOleg Bartunov\nNikita Gluhov\nNikita Malakhov\nTeodor Sigaev\n\n\n[1]\nhttps://www.postgresql.org/message-id/flat/de83407a-ae3d-a8e1-a788-920eb334f25b@sigaev.ru\n\n-- \nTeodor Sigaev E-mail: teodor@sigaev.ru\n WWW: http://www.sigaev.ru/", "msg_date": "Thu, 30 Dec 2021 19:40:09 +0300", "msg_from": "Teodor Sigaev <teodor@sigaev.ru>", "msg_from_op": true, "msg_subject": "Pluggable toaster" }, { "msg_contents": "On Thu, 30 Dec 2021 at 16:40, Teodor Sigaev <teodor@sigaev.ru> wrote:\n\n> We are working on custom toaster for JSONB [1], because current TOAST is\n> universal for any data type and because of that it has some disadvantages:\n> - \"one toast fits all\" may be not the best solution for particular\n> type or/and use cases\n> - it doesn't know the internal structure of data type, so it cannot\n> choose an optimal toast strategy\n> - it can't share common parts between different rows and even\n> versions of 
rows\n\nAgreed, Oleg has made a very clear analysis of the value of having\na higher degree of control over toasting from within the datatype.\n\nIn my understanding, we want to be able to\n1. Access data from a toasted object one slice at a time, by using\nknowledge of the structure\n2. If toasted data is updated, then update a minimum number of\nslice(s), without rewriting the existing slices\n3. If toasted data is expanded, then allow new slices to be appended to\nthe object without rewriting the existing slices\n\n> Modification of current toaster for all tasks and cases looks too\n> complex, moreover, it will not works for custom data types. Postgres\n> is an extensible database, why not to extent its extensibility even\n> further, to have pluggable TOAST! We propose an idea to separate\n> toaster from heap using toaster API similar to table AM API etc.\n> Following patches are applicable over patch in [1]\n\nISTM that we would want the toast algorithm to be associated with the\ndatatype, not the column?\nCan you explain your thinking?\n\nWe already have Expanded toast format, in-memory, which was designed\nspecifically to allow us to access sub-structure of the datatype\nin-memory. So I was expecting to see an Expanded, on-disk, toast\nformat that roughly matched that concept, since Tom has already shown\nus the way. (varatt_expanded). 
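To sketch the kind of sub-structure access I mean - purely illustrative, not the actual varatt_expanded or patch layout - imagine a value cut into fixed-size slices with a tiny table of contents, so a point read or update touches exactly one slice:\n\n```c\n#include <assert.h>\n#include <stdio.h>\n\n/*\n * Hypothetical sliced-value layout: the value is cut into fixed-size\n * chunks, and a table of contents records where each chunk lives.\n * Finding the chunk for a byte offset is O(1), so a small update can\n * rewrite a single chunk instead of the whole value.\n */\n#define SLICE_SIZE 2000   /* roughly TOAST chunk-size territory */\n\ntypedef struct SlicedValue\n{\n    long total_len;       /* total size of the logical value */\n    int  nslices;         /* number of out-of-line chunks */\n} SlicedValue;\n\nstatic int\nslice_for_offset(const SlicedValue *v, long offset)\n{\n    if (offset < 0 || offset >= v->total_len)\n        return -1;                       /* out of range */\n    return (int) (offset / SLICE_SIZE);\n}\n\nstatic int\nslices_needed(long total_len)\n{\n    return (int) ((total_len + SLICE_SIZE - 1) / SLICE_SIZE);\n}\n\nint\nmain(void)\n{\n    SlicedValue v = { 9500, 0 };\n\n    v.nslices = slices_needed(v.total_len);\n    assert(v.nslices == 5);              /* 9500 bytes -> 5 chunks of 2000 */\n    assert(slice_for_offset(&v, 0) == 0);\n    assert(slice_for_offset(&v, 1999) == 0);\n    assert(slice_for_offset(&v, 2000) == 1);\n    assert(slice_for_offset(&v, 9499) == 4);\n    assert(slice_for_offset(&v, 9500) == -1);\n    printf(\"ok\\n\");\n    return 0;\n}\n```\n\nWith such a map in the pointer, updating byte 4500 of a 9500-byte value means \nrewriting slice 2 only, not the whole object. 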
This would be usable by both JSON and\nPostGIS.\n\n\nSome other thoughts:\n\nI imagine the data type might want to keep some kind of dictionary\ninside the main toast pointer, so we could make allowance for some\noptional datatype-specific private area in the toast pointer itself,\nallowing a mix of inline and out-of-line data, and/or a table of\ncontents to the slices.\n\nI'm thinking could also tackle these things at the same time:\n* We want to expand TOAST to 64-bit pointers, so we can have more\npointers in a table\n* We want to avoid putting the data length into the toast pointer, so\nwe can allow the toasted data to be expanded without rewriting\neverything (to avoid O(N^2) cost)\n\n--\nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 5 Jan 2022 14:45:56 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi all!\n\nSimon, thank you for your review.\nI'll try to give a brief explanation on some topics you've mentioned.\nMy colleagues would correct me if I miss the point and provide some more\ndetails.\n\n>Agreed, Oleg has made some very clear analysis of the value of having\n>a higher degree of control over toasting from within the datatype.\nCurrently we see the biggest flaw in TOAST functionality is that it\ndoes not provide any means for extension and modification except\nmodifying the core code itself. It is not possible to use any other\nTOAST strategy except existing in the core, the same issue is with\nassigning different TOAST methods to columns and datatypes.\nThe main point in this patch is actually to provide an open API and\nsyntax for creation of new Toasters as pluggable extensions, and\nto make an existing (default) toaster to work via this API without\naffecting its function. Also, the default toaster is strongly cross-tied\nwith Heap access, with somewhat unclear code relations (headers,\nfunction locations and calls, etc.) 
that are not quite well structured logically and ask to be straightened out.\n\n>In my understanding, we want to be able to\n>1. Access data from a toasted object one slice at a time, by using\n>knowledge of the structure\n>2. If toasted data is updated, then update a minimum number of\n>slices(s), without rewriting the existing slices\n>3. If toasted data is expanded, then allownew slices to be appended to\n>the object without rewriting the existing slices\nThere are two main ideas behind the Pluggable Toaster patch -\nFirst - to provide an extensible API for all Postgres developers, to\nbe able to develop and plug in custom toasters as independent\nextensions for different data types and columns, to use different\ntoast strategies, access and compression methods, and so on;\nSecond - to refactor the current Toast functionality, to improve the Toast\ncode structure and make it more logically structured and\nunderstandable, to 'detach' the default ('generic', as it is currently\nnamed, or maybe the best name for it would be 'heap') toaster from the DBMS\ncore code, route it through the new API and hide all existing internal\nToast-specific functionality behind the new API.\n\nAll the points you mentioned are made available for development by\nthis patch (and, actually, some are being developed - in the\nbytea_appendable_toaster part of this patch or the JSONB toaster by Nikita\nGlukhov; he could provide a much better explanation on this topic).\n\n>> Modification of current toaster for all tasks and cases looks too\n>> complex, moreover, it will not works for custom data types. Postgres\n>> is an extensible database, why not to extent its extensibility even\n>> further, to have pluggable TOAST! 
We propose an idea to separate\n>> toaster from heap using toaster API similar to table AM API etc.\n>> Following patches are applicable over patch in [1]\n\n>ISTM that we would want the toast algorithm to be associated with the\n>datatype, not the column?\n>Can you explain your thinking?\nThis possibility is considered for future development.\n\n>We already have Expanded toast format, in-memory, which was designed\n>specifically to allow us to access sub-structure of the datatype\n>in-memory. So I was expecting to see an Expanded, on-disk, toast\n>format that roughly matched that concept, since Tom has already shown\n>us the way. (varatt_expanded). This would be usable by both JSON and\n>PostGIS.\nThe main disadvantage is that it does not allow the use of any\nother toasting strategies, or compression methods except pglz and\nlz4.\n\n>Some other thoughts:\n\n>I imagine the data type might want to keep some kind of dictionary\n>inside the main toast pointer, so we could make allowance for some\n>optional datatype-specific private area in the toast pointer itself,\n>allowing a mix of inline and out-of-line data, and/or a table of\n>contents to the slices.\nIt is partly implemented in the JSONB custom Toaster, as I mentioned\nabove, and could also be considered for a future improvement of the existing\nToaster as an extension.\n\n>I'm thinking could also tackle these things at the same time:\n>* We want to expand TOAST to 64-bit pointers, so we can have more\n>pointers in a table\nThis issue is being discussed but not currently implemented; it was\nconsidered as one of the possible future improvements.\n\n>* We want to avoid putting the data length into the toast pointer, so\n>we can allow the toasted data to be expanded without rewriting\n>everything (to avoid O(N^2) cost)\nMay I correct you - the actual relation is O(N), not O(N^2).\nCurrently the data length is stored 
custom\n(extended) toasters. Data that is specific to some custom Toasted will\nbe stored inside va_toasterdata structure.\n\nLooking forward to your thoughts on our work.\n\n--\nBest regards,\nNikita A. Malakhov\n\nOn Wed, Jan 5, 2022 at 5:46 PM Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n\n> On Thu, 30 Dec 2021 at 16:40, Teodor Sigaev <teodor@sigaev.ru> wrote:\n>\n> > We are working on custom toaster for JSONB [1], because current TOAST is\n> > universal for any data type and because of that it has some\n> disadvantages:\n> > - \"one toast fits all\" may be not the best solution for particular\n> > type or/and use cases\n> > - it doesn't know the internal structure of data type, so it cannot\n> > choose an optimal toast strategy\n> > - it can't share common parts between different rows and even\n> > versions of rows\n>\n> Agreed, Oleg has made some very clear analysis of the value of having\n> a higher degree of control over toasting from within the datatype.\n>\n> In my understanding, we want to be able to\n> 1. Access data from a toasted object one slice at a time, by using\n> knowledge of the structure\n> 2. If toasted data is updated, then update a minimum number of\n> slices(s), without rewriting the existing slices\n> 3. If toasted data is expanded, then allownew slices to be appended to\n> the object without rewriting the existing slices\n>\n> > Modification of current toaster for all tasks and cases looks too\n> > complex, moreover, it will not works for custom data types. Postgres\n> > is an extensible database, why not to extent its extensibility even\n> > further, to have pluggable TOAST! 
We propose an idea to separate\n> > toaster from heap using toaster API similar to table AM API etc.\n> > Following patches are applicable over patch in [1]\n>\n> ISTM that we would want the toast algorithm to be associated with the\n> datatype, not the column?\n> Can you explain your thinking?\n>\n> We already have Expanded toast format, in-memory, which was designed\n> specifically to allow us to access sub-structure of the datatype\n> in-memory. So I was expecting to see an Expanded, on-disk, toast\n> format that roughly matched that concept, since Tom has already shown\n> us the way. (varatt_expanded). This would be usable by both JSON and\n> PostGIS.\n>\n>\n> Some other thoughts:\n>\n> I imagine the data type might want to keep some kind of dictionary\n> inside the main toast pointer, so we could make allowance for some\n> optional datatype-specific private area in the toast pointer itself,\n> allowing a mix of inline and out-of-line data, and/or a table of\n> contents to the slices.\n>\n> I'm thinking could also tackle these things at the same time:\n> * We want to expand TOAST to 64-bit pointers, so we can have more\n> pointers in a table\n> * We want to avoid putting the data length into the toast pointer, so\n> we can allow the toasted data to be expanded without rewriting\n> everything (to avoid O(N^2) cost)\n>\n> --\n> Simon Riggs http://www.EnterpriseDB.com/\n>\n>\n>\n", "msg_date": "Thu, 13 Jan 2022 15:25:53 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "\n\n> In my understanding, we want to be able to\n> 1. Access data from a toasted object one slice at a time, by using\n> knowledge of the structure\n> 2. 
If toasted data is expanded, then allownew slices to be appended to\n> the object without rewriting the existing slices\n\nThere are more options:\n1 share common parts between not only versions of row but between all \nrows in a column. Seems strange but examples:\n - urls often have a common prefix and so storing in a prefix tree (as\n SP-GiST does) allows significantly decrease storage size\n - the same for json - it's often use case with common part of its\n hierarchical structure\n - one more usecase for json. If json use only a few schemes\n (structure) it's possible to store in toast storage only values and\n don't store keys and structure\n2 Current toast storage stores chunks in heap accesses method and to \nprovide fast access by toast id it makes an index. Ideas:\n - store chunks directly in btree tree, pgsql's btree already has an\n INCLUDE columns, so, chunks and visibility data will be stored only\n in leaf pages. Obviously it reduces number of disk's access for\n \"untoasting\".\n - use another access method for chunk storage\n\n> ISTM that we would want the toast algorithm to be associated with the\n> datatype, not the column?\n> Can you explain your thinking?\nHm. I'll try to explain my motivation.\n1) Datatype could have more than one suitable toasters. For different\n usecases: fast retrieving, compact storage, fast update etc. As I\n told above, for jsonb there are several optimal strategies for\n toasting: for values with a few different structures, for close to\n hierarchical structures, for values with different parts by access\n mode (easy to imagine json with some keys used for search and some\n keys only for output to user)\n2) Toaster could be designed to work with different data type. Suggested\n appendable toaster is designed to work with bytea but could work with\n text\n\nLooking on this point I have doubts where to store connection between \ntoaster and datatype. 
If we add toasteroid to pg_type how to deal with \nseveral toaster for one datatype? (And we could want to has different \ntoaster on one table!) If we add typoid to pg_toaster then how it will \nwork with several datatypes? An idea to add a new many-to-many \nconnection table seems workable but here there are another questions, \nsuch as will any toaster work with any table access method?\n\nTo resolve this bundle of question we propose validate() method of \ntoaster, which should be called during DDL operation, i.e. toaster is \nassigned to column or column's datatype is changed.\n\nMore thought:\nNow postgres has two options for column: storage and compression and now \nwe add toaster. For me it seems too redundantly. Seems, storage should \nbe binary value: inplace (plain as now) and toastable. All other \nvariation such as toast limit, compression enabling, compression kind \nshould be an per-column option for toaster (that's why we suggest valid \ntoaster oid for any column with varlena/toastable datatype). It looks \nlike a good abstraction but we will have a problem with backward \ncompatibility and I'm afraid I can't implement it very fast.\n\n\n\n> \n> We already have Expanded toast format, in-memory, which was designed\n> specifically to allow us to access sub-structure of the datatype\n> in-memory. So I was expecting to see an Expanded, on-disk, toast\n> format that roughly matched that concept, since Tom has already shown\n> us the way. (varatt_expanded). This would be usable by both JSON and\n> PostGIS.\nHm, I don't understand. 
varatt_custom has variable-length tail which \ntoaster could use it by any way, appandable toaster use it to store \nappended tail.\n\n> \n> \n> Some other thoughts:\n> \n> I imagine the data type might want to keep some kind of dictionary\n> inside the main toast pointer, so we could make allowance for some\n> optional datatype-specific private area in the toast pointer itself,\n> allowing a mix of inline and out-of-line data, and/or a table of\n> contents to the slices.\n> \n> I'm thinking could also tackle these things at the same time:\n> * We want to expand TOAST to 64-bit pointers, so we can have more\n> pointers in a table\n> * We want to avoid putting the data length into the toast pointer, so\n> we can allow the toasted data to be expanded without rewriting\n> everything (to avoid O(N^2) cost)\nRight\n\n-- \nTeodor Sigaev E-mail: teodor@sigaev.ru\n WWW: http://www.sigaev.ru/\n\n\n", "msg_date": "Fri, 14 Jan 2022 21:41:55 +0300", "msg_from": "Teodor Sigaev <teodor@sigaev.ru>", "msg_from_op": true, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "I'd like to ask your opinion for next small questions\n\n1) May be, we could add toasteroid to pg_type to for default toaster for \ndatatype. ALTER TYPE type SET DEFAULT TOASTER toaster;\n\n2) The name of default toaster is deftoaster, which was choosen at \nnight, may be heap_toaster is better? heap because default toaster \nstores chunks in heap table.\n\nThank you!\n\n-- \nTeodor Sigaev E-mail: teodor@sigaev.ru\n WWW: http://www.sigaev.ru/\n\n\n", "msg_date": "Fri, 14 Jan 2022 21:47:04 +0300", "msg_from": "Teodor Sigaev <teodor@sigaev.ru>", "msg_from_op": true, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "\n\nOn 1/14/22 19:41, Teodor Sigaev wrote:\n> \n>> In my understanding, we want to be able to\n>> 1. Access data from a toasted object one slice at a time, by using\n>> knowledge of the structure\n>> 2. 
If toasted data is updated, then update a minimum number of\n>> slices(s), without rewriting the existing slices\n>> 3. If toasted data is expanded, then allownew slices to be appended to\n>> the object without rewriting the existing slices\n> \n> There are more options:\n> 1 share common parts between not only versions of row but between all \n> rows in a column. Seems strange but examples:\n>   - urls often have a common prefix and so storing in a prefix tree (as\n>     SP-GiST does) allows significantly decrease storage size\n>   - the same for json - it's often use case with common part of its\n>     hierarchical structure\n>   - one more usecase for json. If json use only a few schemes\n>     (structure) it's possible to store in toast storage only values and\n>     don't store keys and structure\n\nThis sounds interesting, but very much like column compression, which \nwas proposed some time ago. If we haven't made much progress with that \npatch (AFAICS), what's the likelihood we'll succeed here, when it's \ncombined with yet more complexity?\n\nMaybe doing that kind of compression in TOAST is somehow simpler, but I \ndon't see it.\n\n> 2 Current toast storage stores chunks in heap accesses method and to \n> provide fast access by toast id it makes an index. Ideas:\n>   - store chunks directly in btree tree, pgsql's btree already has an\n>     INCLUDE columns, so, chunks and visibility data will be stored only\n>     in leaf pages. Obviously it reduces number of disk's access for\n>     \"untoasting\".\n>   - use another access method for chunk storage\n> \n\nMaybe, but that probably requires more thought - e.g. btree requires the \nvalues to be less than 1/3 of a page, so I wonder how that would play with \ntoasting of values.\n\n>> ISTM that we would want the toast algorithm to be associated with the\n>> datatype, not the column?\n>> Can you explain your thinking?\n> Hm. 
I'll try to explain my motivation.\n> 1) Datatype could have more than one suitable toasters. For different\n>    usecases: fast retrieving, compact storage, fast update etc. As I\n>    told   above, for jsonb there are several optimal strategies for\n>    toasting:   for values with a few different structures, for close to\n>    hierarchical structures,  for values with different parts by access\n>    mode (easy to imagine json with some keys used for search and some\n>    keys only for   output to user)\n> 2) Toaster could be designed to work with different data type. Suggested\n>    appendable toaster is designed to work with bytea but could work with\n>    text\n> \n> Looking on this point I have doubts where to store connection between \n> toaster and datatype. If we add toasteroid to pg_type how to deal with \n> several toaster for one datatype? (And we could want to has different \n> toaster on one table!) If we add typoid to pg_toaster then how it will \n> work with several datatypes? An idea to add a new many-to-many \n> connection table seems workable but here there are another questions, \n> such as will any toaster work with any table access method?\n> \n> To resolve this bundle of question we propose validate() method of \n> toaster, which should be called during DDL operation, i.e. toaster is \n> assigned to column or column's datatype is changed.\n> \n\nSeems you'd need a mapping table, to allow M:N mapping between types and \ntoasters, linking it to all \"compatible\" types. It's not clear to me how \nwould this work with custom data types, domains etc.\n\nAlso, what happens to existing values when you change the toaster? What \nif the toasters don't use the same access method to store the chunks \n(heap vs. btree)? And so on.\n\n> More thought:\n> Now postgres has two options for column: storage and compression and now \n> we add toaster. For me it seems too redundantly. 
Seems, storage should \n> be binary value: inplace (plain as now) and toastable. All other \n> variation such as toast limit, compression enabling, compression kind \n> should be an per-column option for toaster (that's why we suggest valid \n> toaster oid for any column with varlena/toastable datatype). It looks \n> like a good abstraction but we will have a problem with backward \n> compatibility and I'm afraid I can't implement it very fast.\n> \n\nSo you suggest we move all of this to toaster? I'd say -1 to that, \nbecause it makes it much harder to e.g. add custom compression method, etc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 17 Jan 2022 20:23:44 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\n\n>This sounds interesting, but very much like column compression, which\n>was proposed some time ago. If we haven't made much progrees with that\n>patch (AFAICS), what's the likelihood we'll succeed here, when it's\n>combined with yet more complexity?\nThe main concern is that this patch provides open API for Toast\nfunctionality\nand new Toasters could be written as extensions and plugged in instead of\ndefault one, for any column (or datatype). 
It was not possible before,\nbecause\nToast functionality was part of the Postgres core code and was not meant to\nbe modified.\n\n>Maybe doing that kind of compression in TOAST is somehow simpler, but I\n>don't see it.\nA [custom] Toaster extension itself is not restricted to only toast data, it\ncould be\nused for compression, encryption, etc, just name it - any case when data is\nmeant\nto be transformed in some complicated way before being stored and\n(possibly)\ntransformed backwards while being selected from the table, along with some\nsimpler but not so obvious transformations like removing common parts shared\nby all data in a column before storing it and restoring the column value to full\nduring\nselection.\n\n>Seems you'd need a mapping table, to allow M:N mapping between types and\n>toasters, linking it to all \"compatible\" types. It's not clear to me how\n>would this work with custom data types, domains etc.\nAny suitable [custom] Toaster could be plugged in for any table column,\nor [discussable] for a datatype and assigned by default to the corresponding\ncolumn\nfor any table using this datatype.\n\n>Also, what happens to existing values when you change the toaster? What\n>if the toasters don't use the same access method to store the chunks\n>(heap vs. btree)? And so on.\nFor newer data there is no problem - it will be toasted by the newly\nassigned Toaster.\nFor detoasting - the Toaster ID is stored in the Toast pointer, and in principle\nall data\ncould be detoasted by the corresponding toaster if it is available. But this is\nthe topic\nfor discussion and we are open to proposals, because there are possible\ncases\nwhere the older Toaster is not available - the older used Toaster extension is\nnot installed\nat all or was uninstalled, or it was upgraded to a newer version.
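The lookup described above — the Toaster ID travels inside the toast pointer, so detoasting can find the producing toaster regardless of what the column currently has assigned — could look roughly like this. This is a standalone sketch: the struct layout, the OIDs, and the "installed" list are invented stand-ins, not the patch's actual varatt_custom or catalog code:

```c
/* Illustrative sketch: varatt_custom_stub and the "installed" list are
 * invented stand-ins for the patch's toast pointer and toaster catalog. */
#include <assert.h>
#include <stddef.h>

typedef unsigned int Oid;

/* stand-in for a custom toast pointer carrying its creator's OID */
typedef struct varatt_custom_stub
{
    Oid         va_toasterid;   /* which toaster produced this value */
    /* ... toasted payload would follow ... */
} varatt_custom_stub;

typedef struct InstalledToaster
{
    Oid         oid;
    const char *name;
} InstalledToaster;

/* pretend catalog of currently installed toasters (OIDs are made up) */
static const InstalledToaster installed[] = {
    {9816, "default_toaster"},
    {9817, "appendable_toaster"},
};

/* detoast-time dispatch: resolve the handler from the value itself,
 * not from the column's current toaster assignment */
static const InstalledToaster *
lookup_toaster(const varatt_custom_stub *ptr)
{
    for (size_t i = 0; i < sizeof(installed) / sizeof(installed[0]); i++)
    {
        if (installed[i].oid == ptr->va_toasterid)
            return &installed[i];
    }
    return NULL;                /* extension dropped: the unresolved case */
}
```

The NULL branch is exactly the open question in the message above: what to do when a stored value references a toaster that is no longer installed.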
Currently\nwe see\ntwo ways of handling this case - to restrict changing the toaster, and to\nre-toast\nall toasted data which could be very heavy if Toaster is assigned to a\nwidely used\ndatatype, and we're looking forward to any ideas.\n\n>So you suggest we move all of this to toaster? I'd say -1 to that,\n>because it makes it much harder to e.g. add custom compression method, etc.\nOriginal compression methods, etc. are not affected by this patch.\n\nRegards,\n\n--\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nOn Mon, Jan 17, 2022 at 10:23 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n>\n>\n> On 1/14/22 19:41, Teodor Sigaev wrote:\n> >\n> >> In my understanding, we want to be able to\n> >> 1. Access data from a toasted object one slice at a time, by using\n> >> knowledge of the structure\n> >> 2. If toasted data is updated, then update a minimum number of\n> >> slices(s), without rewriting the existing slices\n> >> 3. If toasted data is expanded, then allownew slices to be appended to\n> >> the object without rewriting the existing slices\n> >\n> > There are more options:\n> > 1 share common parts between not only versions of row but between all\n> > rows in a column. Seems strange but examples:\n> > - urls often have a common prefix and so storing in a prefix tree (as\n> > SP-GiST does) allows significantly decrease storage size\n> > - the same for json - it's often use case with common part of its\n> > hierarchical structure\n> > - one more usecase for json. If json use only a few schemes\n> > (structure) it's possible to store in toast storage only values and\n> > don't store keys and structure\n>\n> This sounds interesting, but very much like column compression, which\n> was proposed some time ago. 
If we haven't made much progrees with that\n> patch (AFAICS), what's the likelihood we'll succeed here, when it's\n> combined with yet more complexity?\n>\n> Maybe doing that kind of compression in TOAST is somehow simpler, but I\n> don't see it.\n>\n> > 2 Current toast storage stores chunks in heap accesses method and to\n> > provide fast access by toast id it makes an index. Ideas:\n> > - store chunks directly in btree tree, pgsql's btree already has an\n> > INCLUDE columns, so, chunks and visibility data will be stored only\n> > in leaf pages. Obviously it reduces number of disk's access for\n> > \"untoasting\".\n> > - use another access method for chunk storage\n> >\n>\n> Maybe, but that probably requires more thought - e.g. btree requires the\n> values to be less than 1/3 page, so I wonder how would that play with\n> toasting of values.\n>\n> >> ISTM that we would want the toast algorithm to be associated with the\n> >> datatype, not the column?\n> >> Can you explain your thinking?\n> > Hm. I'll try to explain my motivation.\n> > 1) Datatype could have more than one suitable toasters. For different\n> > usecases: fast retrieving, compact storage, fast update etc. As I\n> > told above, for jsonb there are several optimal strategies for\n> > toasting: for values with a few different structures, for close to\n> > hierarchical structures, for values with different parts by access\n> > mode (easy to imagine json with some keys used for search and some\n> > keys only for output to user)\n> > 2) Toaster could be designed to work with different data type. Suggested\n> > appendable toaster is designed to work with bytea but could work with\n> > text\n> >\n> > Looking on this point I have doubts where to store connection between\n> > toaster and datatype. If we add toasteroid to pg_type how to deal with\n> > several toaster for one datatype? (And we could want to has different\n> > toaster on one table!) 
If we add typoid to pg_toaster then how it will\n> > work with several datatypes? An idea to add a new many-to-many\n> > connection table seems workable but here there are another questions,\n> > such as will any toaster work with any table access method?\n> >\n> > To resolve this bundle of question we propose validate() method of\n> > toaster, which should be called during DDL operation, i.e. toaster is\n> > assigned to column or column's datatype is changed.\n> >\n>\n> Seems you'd need a mapping table, to allow M:N mapping between types and\n> toasters, linking it to all \"compatible\" types. It's not clear to me how\n> would this work with custom data types, domains etc.\n>\n> Also, what happens to existing values when you change the toaster? What\n> if the toasters don't use the same access method to store the chunks\n> (heap vs. btree)? And so on.\n>\n> > More thought:\n> > Now postgres has two options for column: storage and compression and now\n> > we add toaster. For me it seems too redundantly. Seems, storage should\n> > be binary value: inplace (plain as now) and toastable. All other\n> > variation such as toast limit, compression enabling, compression kind\n> > should be an per-column option for toaster (that's why we suggest valid\n> > toaster oid for any column with varlena/toastable datatype). It looks\n> > like a good abstraction but we will have a problem with backward\n> > compatibility and I'm afraid I can't implement it very fast.\n> >\n>\n> So you suggest we move all of this to toaster? I'd say -1 to that,\n> because it makes it much harder to e.g. add custom compression method, etc.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>
", "msg_date": "Tue, 18 Jan 2022 01:25:56 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi!\n\n> Maybe doing that kind of compression in TOAST is somehow simpler, but I \n> don't see it.\nSeems, in an ideal world, compression should be inside the toaster.\n\n> \n>> 2 Current toast storage stores chunks in heap accesses method and to \n>> provide fast access by toast id it makes an index. Ideas:\n>>    - store chunks directly in btree tree, pgsql's btree already has an\n>>      INCLUDE columns, so, chunks and visibility data will be stored only\n>>      in leaf pages. Obviously it reduces number of disk's access for\n>>      \"untoasting\".\n>>    - use another access method for chunk storage\n>>\n> \n> Maybe, but that probably requires more thought - e.g. btree requires the \n> values to be less than 1/3 page, so I wonder how would that play with \n> toasting of values.\nThat's ok, because the chunk size is 2000 bytes right now and it could be \npreserved.\n> \n\n> Seems you'd need a mapping table, to allow M:N mapping between types and \n> toasters, linking it to all \"compatible\" types. It's not clear to me how \n> would this work with custom data types, domains etc.\nIf a toaster looks into the internal structure then it has to know the type's \nbinary format. So, new custom types have little chance to work with an \nold custom toaster. The default toaster works with any type.\n> \n> Also, what happens to existing values when you change the toaster? What \n> if the toasters don't use the same access method to store the chunks \n> (heap vs. btree)? And so on.\n\nvaratt_custom contains the oid of the toaster, and a toaster is not allowed to \nbe deleted (at least, in the suggested patches). So, if a column's toaster has
been changed then old values will be detoasted by the toaster pointed to in \nthe varatt_custom structure, not the one in the column definition. This is very similar \nto how the storage attribute works: when we alter the storage attribute, only new \nvalues will be stored with the new storage type.\n\n> \n>> More thought:\n>> Now postgres has two options for column: storage and compression and \n>> now we add toaster. For me it seems too redundantly. Seems, storage \n>> should be binary value: inplace (plain as now) and toastable. All \n>> other variation such as toast limit, compression enabling, compression \n>> kind should be an per-column option for toaster (that's why we suggest \n>> valid toaster oid for any column with varlena/toastable datatype). It \n>> looks like a good abstraction but we will have a problem with backward \n>> compatibility and I'm afraid I can't implement it very fast.\n>>\n> \n> So you suggest we move all of this to toaster? I'd say -1 to that, \n> because it makes it much harder to e.g. add custom compression method, etc.\nHmm, I suggested leaving only the toaster at the upper level. The compression kind \ncould be chosen in the toaster's options (not implemented yet), or we could even make \nan API interface to compression to make it configurable. Right now, \na module developer cannot implement a module with a new compression \nmethod, and that is a disadvantage.\n-- \nTeodor Sigaev E-mail: teodor@sigaev.ru\n WWW: http://www.sigaev.ru/\n\n\n", "msg_date": "Tue, 18 Jan 2022 17:56:30 +0300", "msg_from": "Teodor Sigaev <teodor@sigaev.ru>", "msg_from_op": true, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "\n\nOn 1/18/22 15:56, Teodor Sigaev wrote:\n> Hi!\n> \n>> Maybe doing that kind of compression in TOAST is somehow simpler, but\n>> I don't see it.\n> Seems, in ideal world, compression should be inside toaster.\n> \n\nI'm not convinced that's universally true.
Yes, I'm sure certain TOAST\nimplementations would benefit from tighter control over compression, but\ndoes that imply compression and toast are redundant? I doubt that,\nbecause we compress non-toasted types too, for example. And layering has\na value too, as it makes it easier to replace the pieces.\n\n>>\n>>> 2 Current toast storage stores chunks in heap accesses method and to\n>>> provide fast access by toast id it makes an index. Ideas:\n>>>    - store chunks directly in btree tree, pgsql's btree already has an\n>>>      INCLUDE columns, so, chunks and visibility data will be stored only\n>>>      in leaf pages. Obviously it reduces number of disk's access for\n>>>      \"untoasting\".\n>>>    - use another access method for chunk storage\n>>>\n>>\n>> Maybe, but that probably requires more thought - e.g. btree requires\n>> the values to be less than 1/3 page, so I wonder how would that play\n>> with toasting of values.\n> That's ok, because chunk size is 2000 bytes right now and its could be\n> saved.\n>>\n\nPerhaps. My main point is that we should not be making too many radical\nchanges at once - it makes it much harder to actually get anything done.\nSo yeah, doing TOAST through IOT might be interesting, but I'd leave\nthat for a separate patch.\n\n>\n>> Seems you'd need a mapping table, to allow M:N mapping between types\n>> and toasters, linking it to all \"compatible\" types. It's not clear to\n>> me how would this work with custom data types, domains etc.\n> If toaster will look into internal structure then it should know type's\n> binary format. So, new custom types have a little chance to work with\n> old custom toaster. Default toaster works with any types.\n\nThe question is what happens when you combine a data type with a toaster\nthat is not designed for that type. I mean, imagine you have a JSONB\ntoaster and you set it for a bytea column.
Naive implementation will\njust crash, because it'll try to process bytea as if it was JSONB.\n\nIt seems better to prevent such incompatible combinations and restrict\neach toaster to just compatible data types, and the mapping table\n(linking toaster and data types) seems a way to do that.\n\nHowever, it seems toasters are either generic (agnostic to data types,\ntreating everything as bytea) or specialized. I doubt any specialized\ntoaster can reasonably support multiple data types, so maybe each\ntoaster can have just one \"compatible type\" OID. If it's invalid, it'd\nbe \"generic\" and otherwise it's useful for that type and types derived\nfrom it (e.g. domains).\n\nSo you'd have the toaster OID in two places:\n\npg_type.toaster_oid - default toaster for the type\npg_attribute.toaster_oid - current toaster for this column\n\nand then you'd have\n\npg_toaster.typid - type this toaster handles (or InvalidOid for generic)\n\n\n>>\n>> Also, what happens to existing values when you change the toaster?\n>> What if the toasters don't use the same access method to store the\n>> chunks (heap vs. btree)? And so on.\n> \n> vatatt_custom contains an oid of toaster and toaster is not allowed to\n> delete (at least, in suggested patches). So, if column's toaster has\n> been changed then old values will be detoasted  by toaster pointed in\n> varatt_custom structure, not in column definition. This is very similar\n> to storage attribute works: we we alter storage attribute only new\n> values will be stored with pointed storage type.\n> \n\nIIRC we do this for compression methods, right?\n\n>>\n>>> More thought:\n>>> Now postgres has two options for column: storage and compression and\n>>> now we add toaster. For me it seems too redundantly. Seems, storage\n>>> should be binary value: inplace (plain as now) and toastable. 
All\n>>> other variation such as toast limit, compression enabling,\n>>> compression kind should be an per-column option for toaster (that's\n>>> why we suggest valid toaster oid for any column with\n>>> varlena/toastable datatype). It looks like a good abstraction but we\n>>> will have a problem with backward compatibility and I'm afraid I\n>>> can't implement it very fast.\n>>>\n>>\n>> So you suggest we move all of this to toaster? I'd say -1 to that,\n>> because it makes it much harder to e.g. add custom compression method,\n>> etc.\n> Hmm, I suggested to leave only toaster at upper level. Compression kind\n> could be chosen in toaster's options (not implemented yet) or even make\n> an API interface to compression to make it configurable. Right now,\n> module developer could not implement a module with new compression\n> method and it is a disadvantage.\n\nIf you have to implement custom toaster to implement custom compression\nmethod, doesn't that make things more complex? You'd have to solve all\nthe issues for custom compression methods and also all issues for custom\ntoaster. Also, what if you want to just compress the column, not toast?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 18 Jan 2022 17:05:46 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\n\n>I'm not convinced that's universally true. Yes, I'm sure certain TOAST\n>implementations would benefit from tighter control over compression, but\n>does that imply compression and toast are redundant? I doubt that,\n>because we compress non-toasted types too, for example. And layering has\n>a value too, as makes it easier to replace the pieces.\nNot exactly. It is a means to control TOAST itself without changing the core\neach time you want to change Toast strategy or method. Compression is\njust an example.
And no other Toasters are available without the proposed patch -\nthere is only the one built-in.\n\n>Perhaps. My main point is that we should not be making too many radical\n>changes at once - it makes it much harder to actually get anything done.\n>So yeah, doing TOAST through IOT might be interesting, but I'd leave\n>that for a separate patch.\nThat's why 4 distinct patches with incremental changes were proposed -\n1) just new Toaster API with some necessary core changes required by the\nAPI;\n2) default toaster routed via new API (but all its functions are not\naffected\nand dummy toaster extension as an example);\n3) 1+2+some refactoring and versioning;\n4) extension module for bytea columns.\nToast through IOT is a topic for discussion but does not seem to give a\nmajor\nadvantage over the existing storage method, according to tests.\n\n>It seems better to prevent such incompatible combinations and restrict\n>each toaster to just compatible data types, and the mapping table\n>(linking toaster and data types) seems a way to do that.\nTo handle this case a validate function (toastervalidate_function) is\nproposed\nin the TsrRoutine structure.\n\n>If you have to implement custom toaster to implement custom compression\n>method, doesn't that make things more complex? You'd have to solve all\n>the issues for custom compression methods and also all issues for custom\n>toaster. Also, what if you want to just compress the column, not toast?\nDefault compression is restricted to 2 compression methods, all other means\nrequire extensions.
Also, the name Toaster is a little bit misleading\nbecause\nit implies that data is being sliced, but that is not always the case - to be\ntoasted,\na piece of bread does not necessarily have to be sliced.\n\nRegards,\n--\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nOn Tue, Jan 18, 2022 at 7:06 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n>\n>\n> On 1/18/22 15:56, Teodor Sigaev wrote:\n> > Hi!\n> >\n> >> Maybe doing that kind of compression in TOAST is somehow simpler, but\n> >> I don't see it.\n> > Seems, in ideal world, compression should be inside toaster.\n> >\n>\n> I'm not convinced that's universally true. Yes, I'm sure certain TOAST\n> implementations would benefit from tighter control over compression, but\n> does that imply compression and toast are redundant? I doubt that,\n> because we compress non-toasted types too, for example. And layering has\n> a value too, as makes it easier to replace the pieces.\n>\n> >>\n> >>> 2 Current toast storage stores chunks in heap accesses method and to\n> >>> provide fast access by toast id it makes an index. Ideas:\n> >>> - store chunks directly in btree tree, pgsql's btree already has an\n> >>> INCLUDE columns, so, chunks and visibility data will be stored\n> only\n> >>> in leaf pages. Obviously it reduces number of disk's access for\n> >>> \"untoasting\".\n> >>> - use another access method for chunk storage\n> >>>\n> >>\n> >> Maybe, but that probably requires more thought - e.g. btree requires\n> >> the values to be less than 1/3 page, so I wonder how would that play\n> >> with toasting of values.\n> > That's ok, because chunk size is 2000 bytes right now and its could be\n> > saved.\n> >>\n>\n> Perhaps.
My main point is that we should not be making too many radical\n> changes at once - it makes it much harder to actually get anything done.\n> So yeah, doing TOAST through IOT might be interesting, but I'd leave\n> that for a separate patch.\n>\n> >\n> >> Seems you'd need a mapping table, to allow M:N mapping between types\n> >> and toasters, linking it to all \"compatible\" types. It's not clear to\n> >> me how would this work with custom data types, domains etc.\n> > If toaster will look into internal structure then it should know type's\n> > binary format. So, new custom types have a little chance to work with\n> > old custom toaster. Default toaster works with any types.\n>\n> The question is what happens when you combine data type with a toaster\n> that is not designed for that type. I mean, imagine you have a JSONB\n> toaster and you set it for a bytea column. Naive implementation will\n> just crash, because it'll try to process bytea as if it was JSONB.\n>\n> It seems better to prevent such incompatible combinations and restrict\n> each toaster to just compatible data types, and the mapping table\n> (linking toaster and data types) seems a way to do that.\n>\n> However, it seems toasters are either generic (agnostic to data types,\n> treating everything as bytea) or specialized. I doubt any specialized\n> toaster can reasonably support multiple data types, so maybe each\n> toaster can have just one \"compatible type\" OID. If it's invalid, it'd\n> be \"generic\" and otherwise it's useful for that type and types derived\n> from it (e.g. 
domains).\n>\n> So you'd have the toaster OID in two places:\n>\n> pg_type.toaster_oid - default toaster for the type\n> pg_attribute.toaster_oid - current toaster for this column\n>\n> and then you'd have\n>\n> pg_toaster.typid - type this toaster handles (or InvalidOid for generic)\n>\n>\n> >>\n> >> Also, what happens to existing values when you change the toaster?\n> >> What if the toasters don't use the same access method to store the\n> >> chunks (heap vs. btree)? And so on.\n> >\n> > vatatt_custom contains an oid of toaster and toaster is not allowed to\n> > delete (at least, in suggested patches). So, if column's toaster has\n> > been changed then old values will be detoasted by toaster pointed in\n> > varatt_custom structure, not in column definition. This is very similar\n> > to storage attribute works: we we alter storage attribute only new\n> > values will be stored with pointed storage type.\n> >\n>\n> IIRC we do this for compression methods, right?\n>\n> >>\n> >>> More thought:\n> >>> Now postgres has two options for column: storage and compression and\n> >>> now we add toaster. For me it seems too redundantly. Seems, storage\n> >>> should be binary value: inplace (plain as now) and toastable. All\n> >>> other variation such as toast limit, compression enabling,\n> >>> compression kind should be an per-column option for toaster (that's\n> >>> why we suggest valid toaster oid for any column with\n> >>> varlena/toastable datatype). It looks like a good abstraction but we\n> >>> will have a problem with backward compatibility and I'm afraid I\n> >>> can't implement it very fast.\n> >>>\n> >>\n> >> So you suggest we move all of this to toaster? I'd say -1 to that,\n> >> because it makes it much harder to e.g. add custom compression method,\n> >> etc.\n> > Hmm, I suggested to leave only toaster at upper level. 
Compression kind\n> > could be chosen in toaster's options (not implemented yet) or even make\n> > an API interface to compression to make it configurable. Right now,\n> > module developer could not implement a module with new compression\n> > method and it is a disadvantage.\n>\n> If you have to implement custom toaster to implement custom compression\n> method, doesn't that make things more complex? You'd have to solve all\n> the issues for custom compression methods and also all issues for custom\n> toaster. Also, what if you want to just compress the column, not toast?\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\n\nHi,\n\n
> I'm not convinced that's universally true. Yes, I'm sure certain TOAST\n
> implementations would benefit from tighter control over compression, but\n
> does that imply compression and toast are redundant? I doubt that,\n
> because we compress non-toasted types too, for example. And layering has\n
> a value too, as makes it easier to replace the pieces.\n\n
Not exactly. It is a means to control TOAST itself without changing the core\n
each time you want to change the TOAST strategy or method. Compression is\n
just an example. And no toasters are available without the proposed patch;\n
there is the one and only.\n\n
> Perhaps. My main point is that we should not be making too many radical\n
> changes at once - it makes it much harder to actually get anything done.\n
> So yeah, doing TOAST through IOT might be interesting, but I'd leave\n
> that for a separate patch.\n\n
That's why 4 distinct patches with incremental changes were proposed:\n
1) just the new Toaster API with some necessary core changes required by the API;\n
2) the default toaster routed via the new API (all its functions are unaffected,\n
with a dummy toaster extension as an example);\n
3) 1+2 plus some refactoring and versioning;\n
4) an extension module for bytea columns.\n
Toast through IOT is a topic for discussion but does not seem to give a major\n
advantage over the existing storage method, according to tests.\n\n
> It seems better to prevent such incompatible combinations and restrict\n
> each toaster to just compatible data types, and the mapping table\n
> (linking toaster and data types) seems a way to do that.\n\n
To handle this case a validate function (toastervalidate_function) is\n
proposed in the TsrRoutine structure.\n\n
> If you have to implement custom toaster to implement custom compression\n
> method, doesn't that make things more complex? You'd have to solve all\n
> the issues for custom compression methods and also all issues for custom\n
> toaster. Also, what if you want to just compress the column, not toast?\n\n
Default compression is restricted to 2 compression methods; all other means\n
require extensions.\n\n
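[Editor's note] The toastervalidate_function mentioned above could, for example, reject assigning a toaster to an incompatible data type or table access method, which is exactly the concern raised later in the thread. A rough sketch of such a routine table with a validate hook follows; only the names TsrRoutine and toastervalidate_function come from the thread, while the signature and every other identifier are assumptions made purely for illustration (bytea's type OID really is 17 in core PostgreSQL, but HEAP_AM here is a stand-in value):

```c
/*
 * Rough sketch of a toaster routine table with a validate hook, as
 * described in the thread.  Only TsrRoutine and toastervalidate_function
 * are names taken from the discussion; the signature and all other
 * identifiers are invented for this example.
 */
#include <assert.h>
#include <stddef.h>

typedef unsigned int Oid;
typedef int bool_t;

typedef struct TsrRoutine
{
    /* returns false if this toaster cannot serve the given combination */
    bool_t (*toastervalidate) (Oid typeoid, Oid tableamoid);
    void  *(*toast) (void *value);
    void  *(*detoast) (void *toasted);
} TsrRoutine;

#define BYTEAOID 17     /* bytea's type OID in core PostgreSQL */
#define HEAP_AM  2      /* stand-in for the heap table AM OID */

/* a specialized toaster that only accepts bytea stored in heap tables */
static bool_t
bytea_heap_validate(Oid typeoid, Oid tableamoid)
{
    return typeoid == BYTEAOID && tableamoid == HEAP_AM;
}

static const TsrRoutine bytea_toaster = { bytea_heap_validate, NULL, NULL };

/* a generic toaster would leave the hook NULL and accept anything */
static bool_t
can_assign_toaster(const TsrRoutine *tsr, Oid typeoid, Oid amoid)
{
    return tsr->toastervalidate == NULL || tsr->toastervalidate(typeoid, amoid);
}
```

A command assigning a toaster to a column could call the hook this way and raise an error when it returns false, preventing the crash scenario of a type-specific toaster (say, for JSONB) being pointed at a bytea column.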
Also, the name Toaster is a little bit misleading because it intends that data\n
is being sliced, but it is not always the case, to be toasted a piece of bread\n
must not necessarily be sliced.\n\n
Regards,\n
--\n
Nikita Malakhov\n
Postgres Professional\n
https://postgrespro.ru/", "msg_date": "Wed, 19 Jan 2022 17:25:19 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "On Thu, Dec 30, 2021 at 11:40 AM Teodor Sigaev <teodor@sigaev.ru> wrote:\n> We are working on custom toaster for JSONB [1], because current TOAST is\n> universal for any data type and because of that it has some disadvantages:\n>     - \"one toast fits all\"  may be not the best solution for particular\n>       type or/and use cases\n>     - it doesn't know the internal structure of data type, so it  cannot\n>       choose an optimal toast strategy\n>     - it can't  share common parts between different rows and even\n>       versions of rows\n\nI agree ... but I'm also worried about what happens when we have\nmultiple table AMs. One can imagine a new table AM that is\nspecifically optimized for TOAST which can be used with an existing\nheap table. One can imagine a new table AM for the main table that\nwants to use something different for TOAST. So, I don't think it's\nright to imagine that the choice of TOASTer depends solely on the\ncolumn data type. I'm not really sure how this should work exactly ...\nbut it needs careful thought.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 20 Jan 2022 10:59:52 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\n\nThe patch provides assigning toaster to a data column separately. 
Assigning\nto a data type is considered worthy\nbut is also a topic for further discussion and is not included in patch.\nWe've been thinking how to integrate AMs and custom toasters, so any\nthoughts are welcome.\n\nRegards,\n\nOn Thu, Jan 20, 2022 at 7:00 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Dec 30, 2021 at 11:40 AM Teodor Sigaev <teodor@sigaev.ru> wrote:\n> > We are working on custom toaster for JSONB [1], because current TOAST is\n> > universal for any data type and because of that it has some\n> disadvantages:\n> >     - \"one toast fits all\"  may be not the best solution for particular\n> >       type or/and use cases\n> >     - it doesn't know the internal structure of data type, so it  cannot\n> >       choose an optimal toast strategy\n> >     - it can't  share common parts between different rows and even\n> >       versions of rows\n>\n> I agree ... but I'm also worried about what happens when we have\n> multiple table AMs. One can imagine a new table AM that is\n> specifically optimized for TOAST which can be used with an existing\n> heap table. One can imagine a new table AM for the main table that\n> wants to use something different for TOAST. So, I don't think it's\n> right to imagine that the choice of TOASTer depends solely on the\n> column data type. I'm not really sure how this should work exactly ...\n> but it needs careful thought.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n>\n>\n\n-- \nRegards,\n\n--\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Thu, 20 Jan 2022 21:24:56 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "> I agree ... but I'm also worried about what happens when we have\n> multiple table AMs. One can imagine a new table AM that is\n> specifically optimized for TOAST which can be used with an existing\n> heap table. One can imagine a new table AM for the main table that\n> wants to use something different for TOAST. 
So, I don't think it's\n> right to imagine that the choice of TOASTer depends solely on the\n> column data type. I'm not really sure how this should work exactly ...\n> but it needs careful thought.\n\nRight. That's why we propose a validate method (maybe it's a wrong\nname, but I don't know a better one) which accepts several arguments, one\nof which is the table AM oid. If that method returns false then the toaster\nisn't useful with the current TAM, storage or/and compression kinds, etc.\n\n-- \nTeodor Sigaev                      E-mail: teodor@sigaev.ru\n                                      WWW: http://www.sigaev.ru/\n\n\n", "msg_date": "Wed, 2 Feb 2022 10:34:49 +0300", "msg_from": "Teodor Sigaev <teodor@sigaev.ru>", "msg_from_op": true, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi Hackers,\n\nIn addition to the original patch set for Pluggable Toaster, we have two more\npatches\n(actually small but important fixes), authored by Nikita Glukhov:\n\n1) 0001-Fix-toast_tuple_externalize.patch\nThis patch fixes freeing memory in case the new toasted value is the same as\nthe old one;\nfreeing it seemed incorrect, so in this case the function now just returns\ninstead of freeing the old value;\n\n2) 0002-Fix-alignment-of-custom-TOAST-pointers.patch\nThis patch adds data alignment for the new varatt_custom data structure in\nbuilding tuples,\nsince varatt_custom must be aligned for custom toasters (in particular,\nthis fix is very\nimportant to the JSONb Toaster).\n\nThese patches must be applied on top of the last patch,\n4_bytea_appendable_toaster_v1.patch.\nPlease consider them in reviewing Pluggable Toaster.\n\nRegards.\n\n\nOn Wed, Feb 2, 2022 at 10:35 AM Teodor Sigaev <teodor@sigaev.ru> wrote:\n\n> > I agree ... but I'm also worried about what happens when we have\n> > multiple table AMs. One can imagine a new table AM that is\n> > specifically optimized for TOAST which can be used with an existing\n> > heap table. One can imagine a new table AM for the main table that\n> > wants to use something different for TOAST. 
So, I don't think it's\n> > right to imagine that the choice of TOASTer depends solely on the\n> > column data type. I'm not really sure how this should work exactly ...\n> > but it needs careful thought.\n>\n> Right. that's why we propose a validate method (may be, it's a wrong\n> name, but I don't known better one) which accepts several arguments, one\n> of which is table AM oid. If that method returns false then toaster\n> isn't useful with current TAM, storage or/and compression kinds, etc.\n>\n> --\n> Teodor Sigaev                      E-mail: teodor@sigaev.ru\n>                                       WWW: http://www.sigaev.ru/\n>\n>\n>\n--\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Thu, 10 Mar 2022 11:47:32 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi Hackers,\nBecause 3 months have passed since the Pluggable Toaster presentation, and a\nlot of\ncommits were pushed into v15 master, we would like to re-introduce\nthis patch\nrebased onto current master. 
Last commit used:\ncommit 641f3dffcdf1c7378cfb94c98b6642793181d6db (origin/master)\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Fri Mar 11 13:47:26 2022 -0500\n\nThe updated patch consists of 4 patch files, the next version (v2) of the\noriginal patch files\n(please check the original commit message from 30 Dec 2021):\n1) 1_toaster_interface_v2.patch.gz\nhttps://github.com/postgrespro/postgres/tree/toaster_interface\nIntroduces syntax for storage and the formal toaster API.\n\n2) 2_toaster_default_v2.patch.gz\nhttps://github.com/postgrespro/postgres/tree/toaster_default\nThe built-in toaster implemented (with some refactoring) using the toaster API\nas the generic (or default) toaster.\n\n3) 3_toaster_snapshot_v2.patch.gz\nhttps://github.com/postgrespro/postgres/tree/toaster_snapshot\nThe patch implements technology to distinguish row versions in toasted\nvalues and to share common parts of toasted values between different\nversions of rows.\n\n4) 4_bytea_appendable_toaster_v2.patch.gz\nhttps://github.com/postgrespro/postgres/tree/bytea_appendable_toaster\nA contrib module implementing a toaster for non-compressed bytea columns,\nwhich allows fast appending to an existing bytea value.\n\nThese patches also include 2 minor fixes made after the commitfest presentation:\n1) A fix for freeing memory in case the new toasted value is the same as the\nold one;\nfreeing it seemed incorrect, so in this case the function now just returns\ninstead of freeing the old value;\n\n2) A fix of data alignment for the new varatt_custom data structure when\nbuilding tuples,\nsince varatt_custom must be aligned for custom toasters (in particular,\nthis fix is very\nimportant to the JSONb Toaster).\n\nThanks!\n\nOn Wed, Feb 2, 2022 at 10:35 AM Teodor Sigaev <teodor@sigaev.ru> wrote:\n\n> > I agree ... but I'm also worried about what happens when we have\n> > multiple table AMs. One can imagine a new table AM that is\n> > specifically optimized for TOAST which can be used with an existing\n> > heap table. 
One can imagine a new table AM for the main table that\n> > wants to use something different for TOAST. So, I don't think it's\n> > right to imagine that the choice of TOASTer depends solely on the\n> > column data type. I'm not really sure how this should work exactly ...\n> > but it needs careful thought.\n>\n> Right. that's why we propose a validate method (may be, it's a wrong\n> name, but I don't known better one) which accepts several arguments, one\n> of which is table AM oid. If that method returns false then toaster\n> isn't useful with current TAM, storage or/and compression kinds, etc.\n>\n> --\n> Teodor Sigaev E-mail: teodor@sigaev.ru\n> WWW: http://www.sigaev.ru/\n>\n>\n>\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Tue, 22 Mar 2022 02:31:21 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\n\nOn 2022-03-22 02:31:21 +0300, Nikita Malakhov wrote:\n> Hi Hackers,\n> Because of 3 months have passed since Pluggable Toaster presentation and a\n> lot of\n> commits were pushed into v15 master - we would like to re-introduce\n> this patch\n> rebased onto actual master. Last commit being used -\n> commit 641f3dffcdf1c7378cfb94c98b6642793181d6db (origin/master)\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date: Fri Mar 11 13:47:26 2022 -0500\n\nIt currently fails to apply: http://cfbot.cputube.org/patch_37_3490.log\n\nGiven the size of the patch, and the degree of review it has gotten so far, it\nseems not realistically a fit for 15, but is marked as such.\n\nThink it should be moved to the next CF. 
Marked as waiting-on-author for now.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Mar 2022 17:51:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\nIn the previous email I attached patches that were not sequential; each of the 4\nfiles contained a complete, independent\npatch to apply to master. Sorry, I re-created the patch files to apply them in\nsequence, as it was meant in the original\nmail by Teodor Sigaev.\nPlease check.\nThanks!\n\nOn Tue, Mar 22, 2022 at 3:51 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-03-22 02:31:21 +0300, Nikita Malakhov wrote:\n> > Hi Hackers,\n> > Because of 3 months have passed since Pluggable Toaster presentation and\n> a\n> > lot of\n> > commits were pushed into v15 master - we would like to re-introduce\n> > this patch\n> > rebased onto actual master. Last commit being used -\n> > commit 641f3dffcdf1c7378cfb94c98b6642793181d6db (origin/master)\n> > Author: Tom Lane <tgl@sss.pgh.pa.us>\n> > Date: Fri Mar 11 13:47:26 2022 -0500\n>\n> It currently fails to apply: http://cfbot.cputube.org/patch_37_3490.log\n>\n> Given the size of the patch, and the degree of review it has gotten so\n> far, it\n> seems not realistically a fit for 15, but is marked as such.\n>\n> Think it should be moved to the next CF. Marked as waiting-on-author for\n> now.\n>\n> Greetings,\n>\n> Andres Freund\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Tue, 22 Mar 2022 15:18:55 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "It looks like it's still not actually building on some compilers. 
I\nsee a bunch of warnings as well as an error:\n\n[03:53:24.660] dummy_toaster.c:97:2: error: void function\n'dummyDelete' should not return a value [-Wreturn-type]\n\nAlso the \"publication\" regression test needs to be adjusted as it\nincludes \\d+ output which has changed to include the toaster.\n\n\n", "msg_date": "Thu, 31 Mar 2022 16:14:36 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\nI have found code, corrupted by a merge, that went unnoticed in dummy_toaster.c.\nPlease look at the attached patches. They must be applied sequentially, one\nafter another,\nstarting from the 1st one. I haven't seen any warnings during compilation (with gcc\non Ubuntu 20),\nand check-world runs with no errors.\n\nOn Thu, Mar 31, 2022 at 11:15 PM Greg Stark <stark@mit.edu> wrote:\n\n> It looks like it's still not actually building on some compilers. I\n> see a bunch of warnings as well as an error:\n>\n> [03:53:24.660] dummy_toaster.c:97:2: error: void function\n> 'dummyDelete' should not return a value [-Wreturn-type]\n>\n> Also the \"publication\" regression test needs to be adjusted as it\n> includes \\d+ output which has changed to include the toaster.\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Fri, 1 Apr 2022 17:11:16 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hm. 
It compiles but it's failing regression tests:\n\ndiff -U3 /tmp/cirrus-ci-build/contrib/dummy_toaster/expected/dummy_toaster.out\n/tmp/cirrus-ci-build/contrib/dummy_toaster/results/dummy_toaster.out\n--- /tmp/cirrus-ci-build/contrib/dummy_toaster/expected/dummy_toaster.out\n2022-04-02 16:02:47.874360253 +0000\n+++ /tmp/cirrus-ci-build/contrib/dummy_toaster/results/dummy_toaster.out\n2022-04-02 16:07:20.878047769 +0000\n@@ -20,186 +20,7 @@\n....\n+server closed the connection unexpectedly\n+ This probably means the server terminated abnormally\n+ before or while processing the request.\n+connection to server was lost\nI think this will require some real debugging, so I'm marking this\nWaiting on Author.\n\n\n", "msg_date": "Sat, 2 Apr 2022 21:20:36 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi, \n\nOn April 2, 2022 6:20:36 PM PDT, Greg Stark <stark@mit.edu> wrote:\n>Hm. It compiles but it's failing regression tests:\n>\n>diff -U3 /tmp/cirrus-ci-build/contrib/dummy_toaster/expected/dummy_toaster.out\n>/tmp/cirrus-ci-build/contrib/dummy_toaster/results/dummy_toaster.out\n>--- /tmp/cirrus-ci-build/contrib/dummy_toaster/expected/dummy_toaster.out\n>2022-04-02 16:02:47.874360253 +0000\n>+++ /tmp/cirrus-ci-build/contrib/dummy_toaster/results/dummy_toaster.out\n>2022-04-02 16:07:20.878047769 +0000\n>@@ -20,186 +20,7 @@\n>....\n>+server closed the connection unexpectedly\n>+ This probably means the server terminated abnormally\n>+ before or while processing the request.\n>+connection to server was lost\n>I think this will require some real debugging, so I'm marking this\n>Waiting on Author.\n\nYes, dumps core (just like in several previous runs):\n\nhttps://cirrus-ci.com/task/4710272324599808?logs=cores#L44\n\nAndres\n-- \nSent from my Android device with K-9 Mail. 
Please excuse my brevity.\n\n\n", "msg_date": "Sat, 02 Apr 2022 19:06:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\nI'm checking. It seems that I've missed something while rebasing, we have\nhad all tests clean before.\n\nOn Sun, Apr 3, 2022 at 5:06 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On April 2, 2022 6:20:36 PM PDT, Greg Stark <stark@mit.edu> wrote:\n> >Hm. It compiles but it's failing regression tests:\n> >\n> >diff -U3\n> /tmp/cirrus-ci-build/contrib/dummy_toaster/expected/dummy_toaster.out\n> >/tmp/cirrus-ci-build/contrib/dummy_toaster/results/dummy_toaster.out\n> >--- /tmp/cirrus-ci-build/contrib/dummy_toaster/expected/dummy_toaster.out\n> >2022-04-02 16:02:47.874360253 +0000\n> >+++ /tmp/cirrus-ci-build/contrib/dummy_toaster/results/dummy_toaster.out\n> >2022-04-02 16:07:20.878047769 +0000\n> >@@ -20,186 +20,7 @@\n> >....\n> >+server closed the connection unexpectedly\n> >+ This probably means the server terminated abnormally\n> >+ before or while processing the request.\n> >+connection to server was lost\n> >I think this will require some real debugging, so I'm marking this\n> >Waiting on Author.\n>\n> Yes, dumps core (just like in several previous runs):\n>\n> https://cirrus-ci.com/task/4710272324599808?logs=cores#L44\n>\n> Andres\n> --\n> Sent from my Android device with K-9 Mail. Please excuse my brevity.\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Sun, 3 Apr 2022 16:15:19 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\n\nI'm really sorry. Messed up some code while merging rebased branches with\nprevious (v1)\npatches issued in December and haven't noticed that seg fault because of\ncorrupted code\nwhile running check-world.\nI've fixed messed code in Dummy Toaster package and checked twice - all's\ncorrect now,\npatches are applied correctly and tests are clean.\nThank you for pointing out the error and for your patience!\n\nOn Sun, Apr 3, 2022 at 5:06 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On April 2, 2022 6:20:36 PM PDT, Greg Stark <stark@mit.edu> wrote:\n> >Hm. 
It compiles but it's failing regression tests:\n> >\n> >diff -U3\n> /tmp/cirrus-ci-build/contrib/dummy_toaster/expected/dummy_toaster.out\n> >/tmp/cirrus-ci-build/contrib/dummy_toaster/results/dummy_toaster.out\n> >--- /tmp/cirrus-ci-build/contrib/dummy_toaster/expected/dummy_toaster.out\n> >2022-04-02 16:02:47.874360253 +0000\n> >+++ /tmp/cirrus-ci-build/contrib/dummy_toaster/results/dummy_toaster.out\n> >2022-04-02 16:07:20.878047769 +0000\n> >@@ -20,186 +20,7 @@\n> >....\n> >+server closed the connection unexpectedly\n> >+ This probably means the server terminated abnormally\n> >+ before or while processing the request.\n> >+connection to server was lost\n> >I think this will require some real debugging, so I'm marking this\n> >Waiting on Author.\n>\n> Yes, dumps core (just like in several previous runs):\n>\n> https://cirrus-ci.com/task/4710272324599808?logs=cores#L44\n>\n> Andres\n> --\n> Sent from my Android device with K-9 Mail. Please excuse my brevity.\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Mon, 4 Apr 2022 19:12:59 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "On Mon, Apr 4, 2022 at 12:13 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n> I'm really sorry. Messed up some code while merging rebased branches with previous (v1)\n> patches issued in December and haven't noticed that seg fault because of corrupted code\n> while running check-world.\n> I've fixed messed code in Dummy Toaster package and checked twice - all's correct now,\n> patches are applied correctly and tests are clean.\n> Thank you for pointing out the error and for your patience!\n\nHi,\n\nThis patch set doesn't seem anywhere close to committable to me. 
For example:\n\n- It apparently adds a new command called CREATE TOASTER but that\ncommand doesn't seem to be documented anywhere.\n\n- contrib/dummy_toaster is added in patch 1 with a no implementation\nand code comments that say \"Bloom index utilities\" and then those\ncomments are fixed and an implementation is added in later patches.\n\n- What is supposedly patch 1 is actually 32 patch files concatenated\ntogether. It doesn't apply properly either with 'patch' or 'git am' so\nI don't understand what we would even do with this.\n\n- None of these patches have appropriate commit messages.\n\n- Many of these patches add 'XXX' comments or errors or other\nobviously unfinished bits of code. Some of this may be cleaned up by\nlater patches but it's hard to tell because right now one can't even\napply the patch set properly. Even if that were possible, it's the job\nof the person submitting the patch to organize the patch into\nindependent, separately committable chunks that are self-contained and\nhave good comments and good commit messages for each one.\n\nI don't think based on the status of this patch set that it's even\npossible to provide useful feedback on the design at this point, never\nmind getting anything committed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 4 Apr 2022 12:39:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Thanks for the review Robert. I think that gives some pretty\nactionable advice on how to improve the patch and it doesn't seem\nlikely to get much more in this cycle.\n\nI'll mark the patch Returned with Feedback. 
Hope to see it come back\nwith improvements in the next release.\n\n\n", "msg_date": "Mon, 4 Apr 2022 15:32:45 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\nThank you very much for your review, I'd like to get it much earlier. I'm\ncurrently\nworking on cleaning up code, and would try to comment as much as possible.\nThis patch set is really a large set of functionality, it was divided into\n4 logically complete\nparts, but anyway these parts contain a lot of changes themselves.\n- Yes, you're right, new syntax is added. I'm also working on the\ndocumentation part for it;\n- Patches actually consist of a lot of minor commits. As I see we have to\nsquash them\nto provide parts as 2-3 main commits without any unnecessary garbage;\n- Is 'git apply' not a valid way to apply such patches?\n\nWe'll try to straighten these issues out asap.\n\nThank you!\n\nOn Mon, Apr 4, 2022 at 7:40 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Apr 4, 2022 at 12:13 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n> > I'm really sorry. Messed up some code while merging rebased branches\n> with previous (v1)\n> > patches issued in December and haven't noticed that seg fault because of\n> corrupted code\n> > while running check-world.\n> > I've fixed messed code in Dummy Toaster package and checked twice -\n> all's correct now,\n> > patches are applied correctly and tests are clean.\n> > Thank you for pointing out the error and for your patience!\n>\n> Hi,\n>\n> This patch set doesn't seem anywhere close to committable to me. 
For\n> example:\n>\n> - It apparently adds a new command called CREATE TOASTER but that\n> command doesn't seem to be documented anywhere.\n>\n> - contrib/dummy_toaster is added in patch 1 with a no implementation\n> and code comments that say \"Bloom index utilities\" and then those\n> comments are fixed and an implementation is added in later patches.\n>\n> - What is supposedly patch 1 is actually 32 patch files concatenated\n> together. It doesn't apply properly either with 'patch' or 'git am' so\n> I don't understand what we would even do with this.\n>\n> - None of these patches have appropriate commit messages.\n>\n> - Many of these patches add 'XXX' comments or errors or other\n> obviously unfinished bits of code. Some of this may be cleaned up by\n> later patches but it's hard to tell because right now one can't even\n> apply the patch set properly. Even if that were possible, it's the job\n> of the person submitting the patch to organize the patch into\n> independent, separately committable chunks that are self-contained and\n> have good comments and good commit messages for each one.\n>\n> I don't think based on the status of this patch set that it's even\n> possible to provide useful feedback on the design at this point, never\n> mind getting anything committed.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Mon, 4 Apr 2022 23:05:16 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "On Mon, Apr 4, 2022 at 4:05 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n> - Is 'git apply' not a valid way to apply such patches?\n\nI have found that it never works. This case is no exception:\n\n[rhaas pgsql]$ git apply ~/Downloads/1_toaster_interface_v4.patch\n/Users/rhaas/Downloads/1_toaster_interface_v4.patch:253: trailing whitespace.\ntoasterapi.o\n/Users/rhaas/Downloads/1_toaster_interface_v4.patch:1276: trailing whitespace.\n{\n/Users/rhaas/Downloads/1_toaster_interface_v4.patch:1294: trailing whitespace.\n * CREATE TOASTER name HANDLER handler_name\n/Users/rhaas/Downloads/1_toaster_interface_v4.patch:2261: trailing whitespace.\n * va_toasterdata could contain varatt_external structure for old Toast\n/Users/rhaas/Downloads/1_toaster_interface_v4.patch:3047: trailing whitespace.\nSELECT attnum, attname, atttypid, attstorage, tsrname\nerror: patch failed: src/backend/commands/tablecmds.c:42\nerror: src/backend/commands/tablecmds.c: patch does not apply\nerror: patch failed: src/backend/commands/tablecmds.c:943\nerror: src/backend/commands/tablecmds.c: patch does not apply\nerror: patch failed: src/backend/commands/tablecmds.c:973\nerror: src/backend/commands/tablecmds.c: patch does not apply\nerror: patch failed: 
src/backend/commands/tablecmds.c:44\nerror: src/backend/commands/tablecmds.c: patch does not apply\n\nI would really encourage you to use 'git format-patch' to generate a\nstack of patches. But there is no point in reposting 30+ patches that\nhaven't been properly refactored into separate chunks. You need to\nmaintain a branch, periodically rebased over master, with some\nprobably-small number of patches on it, each of which is a logically\nindependent patch with its own commit message, its own clear purpose,\netc. And then generate patches to post from there using 'git\nformat-patch'. Look into using 'git rebase -i --autosquash' and 'git\ncommit --fixup' to maintain the branch, if you're not already familiar\nwith those things.\n\nAlso, it is a really good idea when you post the patch set to include\nin the email a clear description of the overall purpose of the patch\nset and what each patch does toward that goal. e.g. \"The overall goal\nof this patch set is to support faster-than-light travel. Currently,\nPostgreSQL does not know anything about the speed of light, so 0001\nadds some code for speed-of-light detection. 
Building on this, 0002\nadds general support for disabling physical laws of the universe.\nThen, 0003 makes use of this support to disable specifically the speed\nof light.\" Perhaps you want a little more text than that for each\npatch, depending on the situation, but this gives you the idea, I\nhope.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 4 Apr 2022 16:17:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\nThanks for the advice.\nWe have 4 branches, one for each patch provided; you can check them out -\n(some copy-paste from the very first email, where the patches were proposed)\n1) 1_toaster_interface\nhttps://github.com/postgrespro/postgres/tree/toaster_interface\nIntroduces syntax for storage and formal toaster API. Adds column\natttoaster to pg_attribute, by design this column should not be equal to\ninvalid oid for any toastable datatype, ie it must have a correct oid for\nany type (not column) with non-plain storage. Since a toaster may support\nonly a particular datatype, core should check the correctness of the toaster\nbeing set via the toaster validate method. New commands could be found in\nsrc/test/regress/sql/toaster.sql. Also includes modification of pg_dump.\n\n2) 2_toaster_default\nhttps://github.com/postgrespro/postgres/tree/toaster_default\nBuilt-in toaster implemented (with some refactoring) using toaster API\nas generic (or default) toaster. 
dummy_toaster here is a minimal\nworkable example, it saves value directly in toast pointer and fails if\nvalue is greater than 1kb.\n\n3) 3_toaster_snapshot\nhttps://github.com/postgrespro/postgres/tree/toaster_snapshot\nThe patch implements technology to distinguish row's versions in toasted\nvalues to share common parts of toasted values between different\nversions of rows\n\n4) 4_bytea_appendable_toaster\nhttps://github.com/postgrespro/postgres/tree/bytea_appendable_toaster\nContrib module implements toaster for non-compressed bytea columns,\nwhich allows fast appending to existing bytea value. Appended tail\nstored directly in toaster pointer, if there is enough space to do it.\n\nWorking on refactoring according to your recommendations.\nThank you!\n\nOn Mon, Apr 4, 2022 at 11:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Apr 4, 2022 at 4:05 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n> > - Is 'git apply' not a valid way to apply such patches?\n>\n> I have found that it never works. 
This case is no exception:\n>\n> [rhaas pgsql]$ git apply ~/Downloads/1_toaster_interface_v4.patch\n> /Users/rhaas/Downloads/1_toaster_interface_v4.patch:253: trailing\n> whitespace.\n> toasterapi.o\n> /Users/rhaas/Downloads/1_toaster_interface_v4.patch:1276: trailing\n> whitespace.\n> {\n> /Users/rhaas/Downloads/1_toaster_interface_v4.patch:1294: trailing\n> whitespace.\n> * CREATE TOASTER name HANDLER handler_name\n> /Users/rhaas/Downloads/1_toaster_interface_v4.patch:2261: trailing\n> whitespace.\n> * va_toasterdata could contain varatt_external structure for old Toast\n> /Users/rhaas/Downloads/1_toaster_interface_v4.patch:3047: trailing\n> whitespace.\n> SELECT attnum, attname, atttypid, attstorage, tsrname\n> error: patch failed: src/backend/commands/tablecmds.c:42\n> error: src/backend/commands/tablecmds.c: patch does not apply\n> error: patch failed: src/backend/commands/tablecmds.c:943\n> error: src/backend/commands/tablecmds.c: patch does not apply\n> error: patch failed: src/backend/commands/tablecmds.c:973\n> error: src/backend/commands/tablecmds.c: patch does not apply\n> error: patch failed: src/backend/commands/tablecmds.c:44\n> error: src/backend/commands/tablecmds.c: patch does not apply\n>\n> I would really encourage you to use 'git format-patch' to generate a\n> stack of patches. But there is no point in reposting 30+ patches that\n> haven't been properly refactored into separate chunks. You need to\n> maintain a branch, periodically rebased over master, with some\n> probably-small number of patches on it, each of which is a logically\n> independent patch with its own commit message, its own clear purpose,\n> etc. And then generate patches to post from there using 'git\n> format-patch'. 
Look into using 'git rebase -i --autosquash' and 'git\n> commit --fixup' to maintain the branch, if you're not already familiar\n> with those things.\n>\n> Also, it is a really good idea when you post the patch set to include\n> in the email a clear description of the overall purpose of the patch\n> set and what each patch does toward that goal. e.g. \"The overall goal\n> of this patch set is to support faster-than-light travel. Currently,\n> PostgreSQL does not know anything about the speed of light, so 0001\n> adds some code for speed-of-light detection. Building on this, 0002\n> adds general support for disabling physical laws of the universe.\n> Then, 0003 makes use of this support to disable specifically the speed\n> of light.\" Perhaps you want a little more text than that for each\n> patch, depending on the situation, but this gives you the idea, I\n> hope.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Tue, 5 Apr 2022 10:58:59 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\nI reworked previous patch set according to recommendations. Patches\nare generated by format-patch and applied by git am. Patches are based on\nmaster from 03.11. 
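The round trip described here — logical commits on a branch, exported with git format-patch and applied with git am — can be sketched end to end. The sketch below is illustrative only: every repository, branch, file, and commit name in it is hypothetical, it assumes a reasonably recent git (2.28+ for `git init -b`), and it is meant to be run in a throwaway directory.

```shell
# Hypothetical end-to-end sketch: build a small patch stack, fold a fixup
# commit into its target with --autosquash, export the series with
# format-patch, and apply it with git am. All names are invented for the demo.
set -eu
work=$(mktemp -d)
cd "$work"
export GIT_AUTHOR_NAME=Dev GIT_AUTHOR_EMAIL=dev@example.com
export GIT_COMMITTER_NAME=Dev GIT_COMMITTER_EMAIL=dev@example.com

git init -q -b master upstream
cd upstream
echo core > core.txt
git add core.txt
git commit -qm "Initial commit"

# Feature branch: one logically independent change per commit.
git checkout -qb toaster
echo api > api.txt
git add api.txt
git commit -qm "Add toaster interface"
echo impl > impl.txt
git add impl.txt
git commit -qm "Add default toaster"

# A later correction aimed at the first patch, recorded as a fixup commit.
echo fix >> api.txt
git add api.txt
git commit -q --fixup HEAD~1

# Fold the fixup into its target; GIT_SEQUENCE_EDITOR=true accepts the
# generated todo list, so the interactive rebase runs unattended.
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash master

# One numbered patch file per logical commit, ready to post.
git format-patch -q -o "$work/patches" master

# Reviewer side: apply the whole series onto a clean checkout.
cd "$work"
git clone -q -b master upstream review
cd review
git am -q "$work"/patches/*.patch
```

After the rebase the branch is back to two clean commits — the fixup has disappeared into "Add toaster interface" — and git am replays exactly those two commits, which is the shape of series being asked for in this thread.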
Also, now we've got clean branch with incremental\ncommits\nwhich could be easily rebased onto a fresh master.\n\nCurrently, there are 8 patches:\n\n1) 0001_create_table_storage_v3.patch - SET STORAGE option for CREATE\nTABLE command by Teodor Sigaev which is required by all the following\nfunctionality;\n\n2) 0002_toaster_interface_v6.patch - Toaster API (SQL syntax for toasters +\nAPI)\nwith Dummy toaster as an example of how this API should be used, but with\ndefault\ntoaster left 'as-is';\n\n3) 0003_toaster_default_v5.patch - default (regular) toaster is implemented\nvia new API;\n\n4) 0004_toaster_snapshot_v5.patch - refactoring of default toaster and\nsupport\nof versioned toasted rows;\n\n5) 0005_bytea_appendable_toaster_v5.patch - bytea toaster by Nikita Glukhov\nCustom toaster for bytea data with support of appending (instead of\nrewriting)\nstored data;\n\n6) 0006_toasterapi_docs_v1.patch - brief documentation on Toaster API in Pg\ndocs;\n\n7) 0007_fix_alignment_of_custom_toast_pointers.patch - fixes custom toast\npointer's\nalignment required by bytea toaster by Nikita Glukhov;\n\n8) 0008_fix_toast_tuple_externalize.patch - fixes toast_tuple_externalize\nfunction\nnot to call toast if old data is the same as new one.\n\nI would be grateful for feedback on the reworked patch set.\n\nOn Mon, Apr 4, 2022 at 11:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Apr 4, 2022 at 4:05 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n> > - Is 'git apply' not a valid way to apply such patches?\n>\n> I have found that it never works. 
This case is no exception:\n>\n> [rhaas pgsql]$ git apply ~/Downloads/1_toaster_interface_v4.patch\n> /Users/rhaas/Downloads/1_toaster_interface_v4.patch:253: trailing\n> whitespace.\n> toasterapi.o\n> /Users/rhaas/Downloads/1_toaster_interface_v4.patch:1276: trailing\n> whitespace.\n> {\n> /Users/rhaas/Downloads/1_toaster_interface_v4.patch:1294: trailing\n> whitespace.\n> * CREATE TOASTER name HANDLER handler_name\n> /Users/rhaas/Downloads/1_toaster_interface_v4.patch:2261: trailing\n> whitespace.\n> * va_toasterdata could contain varatt_external structure for old Toast\n> /Users/rhaas/Downloads/1_toaster_interface_v4.patch:3047: trailing\n> whitespace.\n> SELECT attnum, attname, atttypid, attstorage, tsrname\n> error: patch failed: src/backend/commands/tablecmds.c:42\n> error: src/backend/commands/tablecmds.c: patch does not apply\n> error: patch failed: src/backend/commands/tablecmds.c:943\n> error: src/backend/commands/tablecmds.c: patch does not apply\n> error: patch failed: src/backend/commands/tablecmds.c:973\n> error: src/backend/commands/tablecmds.c: patch does not apply\n> error: patch failed: src/backend/commands/tablecmds.c:44\n> error: src/backend/commands/tablecmds.c: patch does not apply\n>\n> I would really encourage you to use 'git format-patch' to generate a\n> stack of patches. But there is no point in reposting 30+ patches that\n> haven't been properly refactored into separate chunks. You need to\n> maintain a branch, periodically rebased over master, with some\n> probably-small number of patches on it, each of which is a logically\n> independent patch with its own commit message, its own clear purpose,\n> etc. And then generate patches to post from there using 'git\n> format-patch'. 
Look into using 'git rebase\n> -i --autosquash' and 'git\n> commit --fixup' to maintain the branch, if you're not already familiar\n> with those things.\n>\n> Also, it is a really good idea when you post the patch set to include\n> in the email a clear description of the overall purpose of the patch\n> set and what each patch does toward that goal. e.g. \"The overall goal\n> of this patch set is to support faster-than-light travel. Currently,\n> PostgreSQL does not know anything about the speed of light, so 0001\n> adds some code for speed-of-light detection. Building on this, 0002\n> adds general support for disabling physical laws of the universe.\n> Then, 0003 makes use of this support to disable specifically the speed\n> of light.\" Perhaps you want a little more text than that for each\n> patch, depending on the situation, but this gives you the idea, I\n> hope.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Wed, 13 Apr 2022 21:55:03 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\nFor a pluggable toaster - in the previous patch set the part 7 patch file\ncontains an invalid string.\nFixup (the v2 file should be used instead of the previous one) patch:\n7) 0007_fix_alignment_of_custom_toast_pointers.patch - fixes custom toast\npointer's\nalignment required by bytea toaster by Nikita Glukhov;\n\n\nOn Wed, Apr 13, 2022 at 9:55 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> Hi,\n> I reworked previous patch set according to recommendations. Patches\n> are generated by format-patch and applied by git am. Patches are based on\n> master from 03.11. 
Also, now we've got clean branch with incremental\n> commits\n> which could be easily rebased onto a fresh master.\n>\n> Currently, there are 8 patches:\n>\n> 1) 0001_create_table_storage_v3.patch - SET STORAGE option for CREATE\n> TABLE command by Teodor Sigaev which is required by all the following\n> functionality;\n>\n> 2) 0002_toaster_interface_v6.patch - Toaster API (SQL syntax for toasters\n> + API)\n> with Dummy toaster as an example of how this API should be used, but with\n> default\n> toaster left 'as-is';\n>\n> 3) 0003_toaster_default_v5.patch - default (regular) toaster is implemented\n> via new API;\n>\n> 4) 0004_toaster_snapshot_v5.patch - refactoring of default toaster and\n> support\n> of versioned toasted rows;\n>\n> 5) 0005_bytea_appendable_toaster_v5.patch - bytea toaster by Nikita Glukhov\n> Custom toaster for bytea data with support of appending (instead of\n> rewriting)\n> stored data;\n>\n> 6) 0006_toasterapi_docs_v1.patch - brief documentation on Toaster API in\n> Pg docs;\n>\n> 7) 0007_fix_alignment_of_custom_toast_pointers.patch - fixes custom toast\n> pointer's\n> alignment required by bytea toaster by Nikita Glukhov;\n>\n> 8) 0008_fix_toast_tuple_externalize.patch - fixes toast_tuple_externalize\n> function\n> not to call toast if old data is the same as new one.\n>\n> I would be grateful for feedback on the reworked patch set.\n>\n> On Mon, Apr 4, 2022 at 11:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>> On Mon, Apr 4, 2022 at 4:05 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n>> > - Is 'git apply' not a valid way to apply such patches?\n>>\n>> I have found that it never works. 
This case is no exception:\n>>\n>> [rhaas pgsql]$ git apply ~/Downloads/1_toaster_interface_v4.patch\n>> /Users/rhaas/Downloads/1_toaster_interface_v4.patch:253: trailing\n>> whitespace.\n>> toasterapi.o\n>> /Users/rhaas/Downloads/1_toaster_interface_v4.patch:1276: trailing\n>> whitespace.\n>> {\n>> /Users/rhaas/Downloads/1_toaster_interface_v4.patch:1294: trailing\n>> whitespace.\n>> * CREATE TOASTER name HANDLER handler_name\n>> /Users/rhaas/Downloads/1_toaster_interface_v4.patch:2261: trailing\n>> whitespace.\n>> * va_toasterdata could contain varatt_external structure for old Toast\n>> /Users/rhaas/Downloads/1_toaster_interface_v4.patch:3047: trailing\n>> whitespace.\n>> SELECT attnum, attname, atttypid, attstorage, tsrname\n>> error: patch failed: src/backend/commands/tablecmds.c:42\n>> error: src/backend/commands/tablecmds.c: patch does not apply\n>> error: patch failed: src/backend/commands/tablecmds.c:943\n>> error: src/backend/commands/tablecmds.c: patch does not apply\n>> error: patch failed: src/backend/commands/tablecmds.c:973\n>> error: src/backend/commands/tablecmds.c: patch does not apply\n>> error: patch failed: src/backend/commands/tablecmds.c:44\n>> error: src/backend/commands/tablecmds.c: patch does not apply\n>>\n>> I would really encourage you to use 'git format-patch' to generate a\n>> stack of patches. But there is no point in reposting 30+ patches that\n>> haven't been properly refactored into separate chunks. You need to\n>> maintain a branch, periodically rebased over master, with some\n>> probably-small number of patches on it, each of which is a logically\n>> independent patch with its own commit message, its own clear purpose,\n>> etc. And then generate patches to post from there using 'git\n>> format-patch'. 
Look into using 'git rebase -i --autosquash' and 'git\n>> commit --fixup' to maintain the branch, if you're not already familiar\n>> with those things.\n>>\n>> Also, it is a really good idea when you post the patch set to include\n>> in the email a clear description of the overall purpose of the patch\n>> set and what each patch does toward that goal. e.g. \"The overall goal\n>> of this patch set is to support faster-than-light travel. Currently,\n>> PostgreSQL does not know anything about the speed of light, so 0001\n>> adds some code for speed-of-light detection. Building on this, 0002\n>> adds general support for disabling physical laws of the universe.\n>> Then, 0003 makes use of this support to disable specifically the speed\n>> of light.\" Perhaps you want a little more text than that for each\n>> patch, depending on the situation, but this gives you the idea, I\n>> hope.\n>>\n>> --\n>> Robert Haas\n>> EDB: http://www.enterprisedb.com\n>>\n>\n>\n> --\n> Regards,\n> Nikita Malakhov\n> Postgres Professional\n> https://postgrespro.ru/\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Wed, 13 Apr 2022 22:58:24 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers,\n\n> For a pluggable toaster - in previous patch set part 7 patch file contains invalid string.\n> Fixup (v2 file should used instead of previous) patch:\n> 7) 0007_fix_alignment_of_custom_toast_pointers.patch - fixes custom toast pointer's\n> alignment required by bytea toaster by Nikita Glukhov;\n\nI finished digesting the thread and the referred presentations per\nMatthias (cc:'ed) suggestion in [1] discussion. Although the patchset\ngot a fair amount of push-back above, I prefer to stay open minded and\ninvest some of my time into this effort as a tester/reviewer during\nthe July CF. 
Even if the patchset will not make it entirely to the\ncore, some of its parts can be useful.\n\nUnfortunately, I didn't manage to find something that can be applied\nand tested. cfbot is currently not happy with the patchset.\n0001_create_table_storage_v3.patch doesn't apply to the current\norigin/master manually either:\n\n```\nerror: patch failed: src/backend/parser/gram.y:2318\nerror: src/backend/parser/gram.y: patch does not apply\n```\n\nAny chance we can see a rebased patchset for the July CF?\n\n[1]: https://commitfest.postgresql.org/38/3626/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 17 Jun 2022 17:33:24 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers,\nWe're currently working on rebase along other TOAST improvements, hope to\ndo it in time for July CF.\nThank you for your patience.\n\n--\nBest regards,\nNikita Malakhov\n\nOn Fri, Jun 17, 2022 at 5:33 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi hackers,\n>\n> > For a pluggable toaster - in previous patch set part 7 patch file\n> contains invalid string.\n> > Fixup (v2 file should used instead of previous) patch:\n> > 7) 0007_fix_alignment_of_custom_toast_pointers.patch - fixes custom\n> toast pointer's\n> > alignment required by bytea toaster by Nikita Glukhov;\n>\n> I finished digesting the thread and the referred presentations per\n> Matthias (cc:'ed) suggestion in [1] discussion. Although the patchset\n> got a fair amount of push-back above, I prefer to stay open minded and\n> invest some of my time into this effort as a tester/reviewer during\n> the July CF. Even if the patchset will not make it entirely to the\n> core, some of its parts can be useful.\n>\n> Unfortunately, I didn't manage to find something that can be applied\n> and tested. 
cfbot is currently not happy with the patchset.\n> 0001_create_table_storage_v3.patch doesn't apply to the current\n> origin/master manually either:\n>\n> ```\n> error: patch failed: src/backend/parser/gram.y:2318\n> error: src/backend/parser/gram.y: patch does not apply\n> ```\n>\n> Any chance we can see a rebased patchset for the July CF?\n>\n> [1]: https://commitfest.postgresql.org/38/3626/\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\nHi hackers,We're currently working on rebase along other TOAST improvements, hope to do it in time for July CF.Thank you for your patience.--Best regards,Nikita MalakhovOn Fri, Jun 17, 2022 at 5:33 PM Aleksander Alekseev <aleksander@timescale.com> wrote:Hi hackers,\n\n> For a pluggable toaster - in previous patch set part 7 patch file contains invalid string.\n> Fixup (v2 file should used instead of previous) patch:\n> 7) 0007_fix_alignment_of_custom_toast_pointers.patch - fixes custom toast pointer's\n> alignment required by bytea toaster by Nikita Glukhov;\n\nI finished digesting the thread and the referred presentations per\nMatthias (cc:'ed) suggestion in [1] discussion. Although the patchset\ngot a fair amount of push-back above, I prefer to stay open minded and\ninvest some of my time into this effort as a tester/reviewer during\nthe July CF. Even if the patchset will not make it entirely to the\ncore, some of its parts can be useful.\n\nUnfortunately, I didn't manage to find something that can be applied\nand tested. 
cfbot is currently not happy with the patchset.\n> 0001_create_table_storage_v3.patch doesn't apply to the current\n> origin/master manually either:\n>\n> ```\n> error: patch failed: src/backend/parser/gram.y:2318\n> error: src/backend/parser/gram.y: patch does not apply\n> ```\n>\n> Any chance we can see a rebased patchset for the July CF?\n>\n> [1]: https://commitfest.postgresql.org/38/3626/\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n", "msg_date": "Thu, 23 Jun 2022 14:46:07 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi Nikita,\n\n> We're currently working on rebase along other TOAST improvements, hope to do it in time for July CF.\n> Thank you for your patience.\n\nJust to clarify, does it include the dependent \"CREATE TABLE ( ..\nSTORAGE .. )\" patch [1]? 
I was considering changing the patch\n> according to the feedback it got, but if you are already working on\n> this I'm not going to interfere.\n>\n> [1]: https://postgr.es/m/de83407a-ae3d-a8e1-a788-920eb334f25b%40sigaev.ru\n> --\n> Best regards,\n> Aleksander Alekseev\n\nHi,Alexander, thank you for your feedback and willingness to help. You can send a suggested fixup in this thread, I'll check the issue you've mentioned.Best regards,Nikita MalakhovOn Thu, Jun 23, 2022 at 4:38 PM Aleksander Alekseev <aleksander@timescale.com> wrote:Hi Nikita,\n\n> We're currently working on rebase along other TOAST improvements, hope to do it in time for July CF.\n> Thank you for your patience.\n\nJust to clarify, does it include the dependent \"CREATE TABLE ( ..\nSTORAGE .. )\" patch [1]? I was considering changing the patch\naccording to the feedback it got, but if you are already working on\nthis I'm not going to interfere.\n\n[1]: https://postgr.es/m/de83407a-ae3d-a8e1-a788-920eb334f25b%40sigaev.ru\n--\nBest regards,\nAleksander Alekseev", "msg_date": "Thu, 23 Jun 2022 16:53:47 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers!\nHere is the patch set rebased onto current master (15 rel beta 2 with\ncommit from 29.06).\nJust to remind:\nIn Pluggable TOAST we suggest a way to make TOAST pluggable as Storage (in\na way like Pluggable Access Methods) - we extracted\nTOAST mechanics from Heap AM, and made it an independent pluggable and\nextensible part with our freshly developed TOAST API.\nWith this patch set you will be able to develop and plug in your own TOAST\nmechanics for table columns. 
Knowing the internals and/or the workflow and workload\nof the data being TOASTed makes Custom Toasters much more efficient in\nperformance and storage.\nWe keep backwards compatibility: the default TOAST mechanics work as they\ndid previously, working silently with any Toastable datatype\n(and with TOASTed values and tables from previous versions, no changes in this),\nand are used as the default Toaster if not stated otherwise, but now routed through our TOAST\nAPI.\nThe TOAST API does not add any noticeable overhead in comparison to the\noriginal (master); the measurements proving this are in our research materials.\n\nWe've already presented our work at the HighLoad, PgCon and PgConf conferences;\nyou can find the materials here:\nhttp://www.sai.msu.su/~megera/postgres/talks/\n\nWe have ready-to-plug-in extension Toasters:\n- a bytea appendable toaster for the bytea datatype (impressive speedup with\nthe bytea append operation)\n- a JSONB toaster for JSONB (very cool performance improvements when dealing\nwith TOASTed JSONB)\nand prototype Toasters (in development) for PostGIS (much faster than the\ndefault with geometric data), large binary objects\n(like pg_largeobject, but much, much larger, and without the existing large\nobject limitations), and a default Toaster implementation without using Indexes.\n\nThe patch set consists of 9 incremental patches:\n0001_create_table_storage_v4.patch - SQL syntax fix for the CREATE TABLE\nclause, processing SET STORAGE... 
correctly;\n\n0002_toaster_interface_v7.patch - TOAST API interface and SQL syntax\nallowing creation of a custom Toaster (CREATE TOASTER ...)\nand setting a Toaster on a table column (CREATE TABLE t (data bytea STORAGE\nEXTERNAL TOASTER bytea_toaster);)\n\n0003_toaster_default_v6.patch - Default TOAST implemented via the TOAST API;\n\n0004_toaster_snapshot_v6.patch - refactoring of Default TOAST and support\nfor versioned Toast rows;\n\n0005_bytea_appendable_toaster_v6.patch - contrib module\nbytea_appendable_toaster - a special Toaster for the bytea datatype with\na customized append operation;\n\n0006_toasterapi_docs_v2.patch - documentation package for Pluggable TOAST;\n\n0007_fix_alignment_of_custom_toast_pointers_v2.patch - fixes the custom toast\npointer's\nalignment required by the bytea toaster, by Nikita Glukhov;\n\n0008_fix_toast_tuple_externalize_v2.patch - fixes the toast_tuple_externalize\nfunction\nnot to call toast if the old data is the same as the new one.\n\n0009_bytea_contrib_and_varlena_v1.patch - several late fixups for 0005.\n\nThis patch set opens the following issues:\n1) With TOAST independent of the AM that uses it, it makes sense to move\ncompression from the AM into the Toaster and make Compression one of the Toaster's\noptions.\nActually, Toasters allow using any compression method independently of the AM;\n2) Should the default Toaster be implemented without using Indexes (currently in\ndevelopment)?\n3) It allows different, SQL-accessed large objects of almost infinite size IN\nDATABASE, unlike the current large_object functionality, and does not limit\ntheir quantity;\n4) Several already developed Toasters show impressive results for the\ndatatypes they were designed for.\n\nWe gladly appreciate your feedback!\n\n--\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nOn Thu, Jun 23, 2022 at 4:53 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> Hi,\n> Alexander, thank you for your feedback and willingness to help. 
You can\n> send a suggested fixup in this thread, I'll check the issue\n> you've mentioned.\n>\n> Best regards,\n> Nikita Malakhov\n>\n> On Thu, Jun 23, 2022 at 4:38 PM Aleksander Alekseev <\n> aleksander@timescale.com> wrote:\n>\n>> Hi Nikita,\n>>\n>> > We're currently working on rebase along other TOAST improvements, hope\n>> to do it in time for July CF.\n>> > Thank you for your patience.\n>>\n>> Just to clarify, does it include the dependent \"CREATE TABLE ( ..\n>> STORAGE .. )\" patch [1]? I was considering changing the patch\n>> according to the feedback it got, but if you are already working on\n>> this I'm not going to interfere.\n>>\n>> [1]: https://postgr.es/m/de83407a-ae3d-a8e1-a788-920eb334f25b%40sigaev.ru\n>> --\n>> Best regards,\n>> Aleksander Alekseev\n>\n>", "msg_date": "Thu, 30 Jun 2022 23:26:46 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Rebased onto 15 REL BETA 2", "msg_date": "Thu, 30 Jun 2022 20:27:47 +0000", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi Nikita,\n\n> Here is the patch set rebased onto current master (15 rel beta 2 with commit from 29.06).\n\nThanks for the rebased patchset.\n\nThis is a huge effort though. I suggest splitting it into several CF\nentries as we previously did with other patches (64-bit XIDs to name\none, which BTW is arguably much simpler, but still we had to split\nit). This will simplify the review, limit the scope of the discussion\nand simplify passing the CI. Cfbot is already not happy with the\npatchset.\n\n0001 - is already in a separate thread [1], that's good. I suggest\nmarking it in the patch description for clarity.\n0002, 0003 - I suggest focusing on these two in this thread and keep\nthe rest of the changes for later discussion. Please submit 0004,\n0005... 
next time, when we finish with 0001-0003.\n\nThe order of proposed changes IMO is wrong.\n\n0002 should refactor the default TOASTer in a manner similar to a\npluggable one. Nothing should change from the user's perspective. If\nyou do benchmarks, I suggest not to reference the previous talks. I\nfamiliarized myself with all the related talks linked before (took me\nsome time...) and found them useless for the discussion since they\ndon't provide exact steps to reproduce. Please provide exact scripts\nand benchmarks results for 0002 in this thread.\n\n0003 should add an interface that allows replacing the default TOASTer\nwith an alternative one. There is no need for contrib/dummy_toaster\nsimilarly as there is no contrib/ for a dummy TableAM. The provided\nexample doesn't do much anyway since all the heavy lifting should be\ndone in the callbacks implementation. For test purposes please use\nsrc/test/regress/.\n\n[1]: https://www.postgresql.org/message-id/de83407a-ae3d-a8e1-a788-920eb334f25b%40sigaev.ru\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 1 Jul 2022 11:10:46 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\nAlexander, thank you for your feedback!\nI'd explain our thoughts:\nWe thought that refactoring default TOAST mechanics via TOAST API in p.\n0002 would be too much because the API itself already\nintroduced a lot of changes, so we kept Default Toasters re-implementation\nfor later patch.\n0002 introduces custom Toast pointers with corresponding macro set\n(postgres.h), Dummy toaster as an example for developers\nhow the API should be used, but left default TOAST as-is. 
As I see it, there\nare not a lot of custom TAMs, despite the pluggable storage\ninterface having been introduced several years ago, so we thought that some simple\nexample of how to use the new API would be nice to have.\nWe've done the TOAST refactoring in 0003, but it has not replaced the default\nTOAST; it just routed the default through the TOAST API, while it still\nis left as part of the core, and is used for TOASTing (set\nin the atttoaster column) by default.\n\nFor performance testing we used a lot of manually corrected scripts; I have\nto put them in order, but I will provide them later as\nan additional side patch for this patch set.\n\nBest regards,\n--\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nOn Fri, Jul 1, 2022 at 11:10 AM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi Nikita,\n>\n> > Here is the patch set rebased onto current master (15 rel beta 2 with\n> commit from 29.06).\n>\n> Thanks for the rebased patchset.\n>\n> This is a huge effort though. I suggest splitting it into several CF\n> entries as we previously did with other patches (64-bit XIDs to name\n> one, which BTW is arguably much simpler, but still we had to split\n> it). This will simplify the review, limit the scope of the discussion\n> and simplify passing the CI. Cfbot is already not happy with the\n> patchset.\n>\n> 0001 - is already in a separate thread [1], that's good. I suggest\n> marking it in the patch description for clarity.\n> 0002, 0003 - I suggest focusing on these two in this thread and keep\n> the rest of the changes for later discussion. Please submit 0004,\n> 0005... 
and found them useless for the discussion since they\n> don't provide exact steps to reproduce. Please provide exact scripts\n> and benchmarks results for 0002 in this thread.\n>\n> 0003 should add an interface that allows replacing the default TOASTer\n> with an alternative one. There is no need for contrib/dummy_toaster\n> similarly as there is no contrib/ for a dummy TableAM. The provided\n> example doesn't do much anyway since all the heavy lifting should be\n> done in the callbacks implementation. For test purposes please use\n> src/test/regress/.\n>\n> [1]:\n> https://www.postgresql.org/message-id/de83407a-ae3d-a8e1-a788-920eb334f25b%40sigaev.ru\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\nHi,Alexander, thank you for your feedback!I'd explain our thoughts:We thought that refactoring default TOAST mechanics via TOAST API in p. 0002 would be too much because the API itself alreadyintroduced a lot of changes, so we kept Default Toasters re-implementation for later patch. 0002 introduces custom Toast pointers with corresponding macro set (postgres.h), Dummy toaster as an example for developers how the API should be used, but left default TOAST as-is. 
As I see, there are no lots of custom TAMs, despite of pluggable storage interface introduced several years ago, so we thought that some simple example of how to use the new API would be nice to have.We've done TOAST refactoring in 0003, but it has not replaced default TOAST, it just routed it default via TOAST API, but it stillis left as part of the core, and is used for TOASTing (set in atttoaster column) by default.For performance testing we used a lot of manually corrected scripts, I have to put them in order but I would provide them later asadditional side patch for this patch set.Best regards,--Nikita MalakhovPostgres Professional https://postgrespro.ru/On Fri, Jul 1, 2022 at 11:10 AM Aleksander Alekseev <aleksander@timescale.com> wrote:Hi Nikita,\n\n> Here is the patch set rebased onto current master (15 rel beta 2 with commit from 29.06).\n\nThanks for the rebased patchset.\n\nThis is a huge effort though. I suggest splitting it into several CF\nentries as we previously did with other patches (64-bit XIDs to name\none, which BTW is arguably much simpler, but still we had to split\nit). This will simplify the review, limit the scope of the discussion\nand simplify passing the CI. Cfbot is already not happy with the\npatchset.\n\n0001 - is already in a separate thread [1], that's good. I suggest\nmarking it in the patch description for clarity.\n0002, 0003 - I suggest focusing on these two in this thread and keep\nthe rest of the changes for later discussion. Please submit 0004,\n0005... next time, when we finish with 0001-0003.\n\nThe order of proposed changes IMO is wrong.\n\n0002 should refactor the default TOASTer in a manner similar to a\npluggable one. Nothing should change from the user's perspective. If\nyou do benchmarks, I suggest not to reference the previous talks. I\nfamiliarized myself with all the related talks linked before (took me\nsome time...) and found them useless for the discussion since they\ndon't provide exact steps to reproduce. 
Please provide exact scripts\nand benchmarks results for 0002 in this thread.\n\n0003 should add an interface that allows replacing the default TOASTer\nwith an alternative one. There is no need for contrib/dummy_toaster\nsimilarly as there is no contrib/ for a dummy TableAM. The provided\nexample doesn't do much anyway since all the heavy lifting should be\ndone in the callbacks implementation. For test purposes please use\nsrc/test/regress/.\n\n[1]: https://www.postgresql.org/message-id/de83407a-ae3d-a8e1-a788-920eb334f25b%40sigaev.ru\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Fri, 1 Jul 2022 15:14:50 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "On Thu, 30 Jun 2022 at 22:26, Nikita Malakhov <hukutoc@gmail.com> wrote:\n>\n> Hi hackers!\n> Here is the patch set rebased onto current master (15 rel beta 2 with commit from 29.06).\n\nThanks!\n\n> Just to remind:\n> With this patch set you will be able to develop and plug in your own TOAST mechanics for table columns. Knowing internals and/or workflow and workload\n> of data being TOASTed makes Custom Toasters much more efficient in performance and storage.\n\nThe new toast API doesn't seem to be very well documented, nor are the\nnew features. Could you include a README or extend the comments on how\nthis is expected to work, and/or how you expect people to use (the\nresult of) `get_vtable`?\n\n> Patch set consists of 9 incremental patches:\n> [...]\n> 0002_toaster_interface_v7.patch - TOAST API interface and SQL syntax allowing creation of custom Toaster (CREATE TOASTER ...)\n> and setting Toaster to a table column (CREATE TABLE t (data bytea STORAGE EXTERNAL TOASTER bytea_toaster);)\n\nThis patch 0002 seems to include changes to log files (!) that don't\nexist in current HEAD, but at the same time are not created by patch\n0001. 
Could you please check and sanitize your patches to ensure that\nthe changes are actually accurate?\n\nLike Robert Haas mentioned earlier[0], please create a branch in a git\nrepository that has a commit containing the changes for each patch,\nand then use git format-patch to generate a single patchset, one that\nshares a single version number. Keeping track of what patches are\nneeded to test this CF entry is already quite difficult due to the\namount of patches and their packaging (I'm having troubles managing\nthese seperate .patch.gz), and the different version tags definitely\ndon't help in finding the correct set of patches to apply once\ndownloaded.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://www.postgresql.org/message-id/CA%2BTgmoZBgNipyKuQAJzNw2w7C9z%2B2SMC0SAHqCnc_dG1nSLNcw%40mail.gmail.com\n\n\n", "msg_date": "Fri, 1 Jul 2022 14:27:16 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi!\nWe have branch with incremental commits worm where patches were generated\nwith format-patch -\nhttps://github.com/postgrespro/postgres/tree/toasterapi_clean\nI'll clean up commits from garbage files asap, sorry, haven't noticed them\nwhile moving changes.\n\nBest regards,\nNikita Malakhov\n\nOn Fri, Jul 1, 2022 at 3:27 PM Matthias van de Meent <\nboekewurm+postgres@gmail.com> wrote:\n\n> On Thu, 30 Jun 2022 at 22:26, Nikita Malakhov <hukutoc@gmail.com> wrote:\n> >\n> > Hi hackers!\n> > Here is the patch set rebased onto current master (15 rel beta 2 with\n> commit from 29.06).\n>\n> Thanks!\n>\n> > Just to remind:\n> > With this patch set you will be able to develop and plug in your own\n> TOAST mechanics for table columns. 
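The requested workflow — one commit per logical change on a branch, then a single patchset generated with git format-patch so every file shares one version number — can be sketched as follows (the repository, commit messages, and output directory are only illustrative):

```shell
set -e
# Throwaway repository with one commit per logical patch:
mkdir demo-repo && cd demo-repo
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "base"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "toaster interface"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "default toaster via TOAST API"
# -v9 stamps every generated file with the same shared version prefix,
# producing patches/v9-0001-...patch, patches/v9-0002-...patch:
git format-patch -v9 -o patches HEAD~2
ls patches
```

Re-running with -v10, -v11, ... keeps each resubmission of the whole set under one version number, which is what makes the set easy to track.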
Knowing internals and/or workflow and\n> workload\n> > of data being TOASTed makes Custom Toasters much more efficient in\n> performance and storage.\n>\n> The new toast API doesn't seem to be very well documented, nor are the\n> new features. Could you include a README or extend the comments on how\n> this is expected to work, and/or how you expect people to use (the\n> result of) `get_vtable`?\n>\n> > Patch set consists of 9 incremental patches:\n> > [...]\n> > 0002_toaster_interface_v7.patch - TOAST API interface and SQL syntax\n> allowing creation of custom Toaster (CREATE TOASTER ...)\n> > and setting Toaster to a table column (CREATE TABLE t (data bytea\n> STORAGE EXTERNAL TOASTER bytea_toaster);)\n>\n> This patch 0002 seems to include changes to log files (!) that don't\n> exist in current HEAD, but at the same time are not created by patch\n> 0001. Could you please check and sanitize your patches to ensure that\n> the changes are actually accurate?\n>\n> Like Robert Haas mentioned earlier[0], please create a branch in a git\n> repository that has a commit containing the changes for each patch,\n> and then use git format-patch to generate a single patchset, one that\n> shares a single version number. 
Keeping track of what patches are\n> needed to test this CF entry is already quite difficult due to the\n> amount of patches and their packaging (I'm having troubles managing\n> these seperate .patch.gz), and the different version tags definitely\n> don't help in finding the correct set of patches to apply once\n> downloaded.\n>\n> Kind regards,\n>\n> Matthias van de Meent\n>\n> [0]\n> https://www.postgresql.org/message-id/CA%2BTgmoZBgNipyKuQAJzNw2w7C9z%2B2SMC0SAHqCnc_dG1nSLNcw%40mail.gmail.com\n>\n\nHi!We have branch with incremental commits worm where patches were generated with format-patch -https://github.com/postgrespro/postgres/tree/toasterapi_cleanI'll clean up commits from garbage files asap, sorry, haven't noticed them while moving changes.Best regards,Nikita MalakhovOn Fri, Jul 1, 2022 at 3:27 PM Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:On Thu, 30 Jun 2022 at 22:26, Nikita Malakhov <hukutoc@gmail.com> wrote:\n>\n> Hi hackers!\n> Here is the patch set rebased onto current master (15 rel beta 2 with commit from 29.06).\n\nThanks!\n\n> Just to remind:\n> With this patch set you will be able to develop and plug in your own TOAST mechanics for table columns. Knowing internals and/or workflow and workload\n> of data being TOASTed makes Custom Toasters much more efficient in performance and storage.\n\nThe new toast API doesn't seem to be very well documented, nor are the\nnew features. Could you include a README or extend the comments on how\nthis is expected to work, and/or how you expect people to use (the\nresult of) `get_vtable`?\n\n> Patch set consists of 9 incremental patches:\n> [...]\n> 0002_toaster_interface_v7.patch - TOAST API interface and SQL syntax allowing creation of custom Toaster (CREATE TOASTER ...)\n> and setting Toaster to a table column (CREATE TABLE t (data bytea STORAGE EXTERNAL TOASTER bytea_toaster);)\n\nThis patch 0002 seems to include changes to log files (!) 
that don't\nexist in current HEAD, but at the same time are not created by patch\n0001. Could you please check and sanitize your patches to ensure that\nthe changes are actually accurate?\n\nLike Robert Haas mentioned earlier[0], please create a branch in a git\nrepository that has a commit containing the changes for each patch,\nand then use git format-patch to generate a single patchset, one that\nshares a single version number. Keeping track of what patches are\nneeded to test this CF entry is already quite difficult due to the\namount of patches and their packaging (I'm having troubles managing\nthese seperate .patch.gz), and the different version tags definitely\ndon't help in finding the correct set of patches to apply once\ndownloaded.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://www.postgresql.org/message-id/CA%2BTgmoZBgNipyKuQAJzNw2w7C9z%2B2SMC0SAHqCnc_dG1nSLNcw%40mail.gmail.com", "msg_date": "Mon, 11 Jul 2022 15:03:50 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers!\nAccording to previous requests the patch branch was cleaned up from\ngarbage, logs, etc. All conflicts' resolutions were merged\ninto patch commits where they appear, branch was rebased to present one\ncommit for one patch. The branch was actualized,\nand a fresh patch set was generated.\nhttps://github.com/postgrespro/postgres/tree/toasterapi_clean\n\nWhat we propose in short:\nWe suggest a way to make TOAST pluggable as Storage (in a way like\nPluggable Access\nMethods) - detached TOAST\nmechanics from Heap AM, and made it an independent pluggable and extensible\npart with our freshly developed TOAST API.\nWith this patch set you will be able to develop and plug in your own\nTOAST mechanics\nfor table columns. 
Knowing the internals\nand/or the workflow and workload of the data being TOASTed makes Custom Toasters much\nmore efficient in performance and storage.\nWe keep backwards compatibility: the default TOAST mechanics work as they\ndid previously, working silently with any\nToastable datatype\n(and with TOASTed values and tables from previous versions, no changes in this),\nand are used as the default Toaster if not stated otherwise,\nbut now routed through our TOAST\nAPI.\n\nWe've already presented our work at the HighLoad, PgCon and PgConf conferences;\nyou can find the materials here:\nhttp://www.sai.msu.su/~megera/postgres/talks/\nThe testing scripts used in the talks are a bit scarce and involve a lot of\nmanual handling, so it is another bit of work to bunch them into the\npatch set; please be patient, I'll try to make it ASAP.\n\nWe have ready-to-plug-in extension Toasters:\n- a bytea appendable toaster for the bytea datatype (impressive speedup with\nthe bytea append operation) is included in this patch set;\n- a JSONB toaster for JSONB (very cool performance improvements when dealing\nwith TOASTed JSONB) will be provided later;\n- Prototype Toasters (in development) for PostGIS (much faster than the\ndefault with geometric data), large binary objects\n(like pg_largeobject, but much, much larger, and without the existing large\nobject limitations); and currently we're checking a default\nToaster implementation without using Indexes (direct access by TIDs, up to\n3 times faster than the default on smaller values,\nless storage due to the absence of an index tree).\n\nThe patch set consists of 8 incremental patches:\n0001_create_table_storage_v5.patch - SQL syntax fix for the CREATE TABLE\nclause, processing SET STORAGE... 
correctly;\nThis patch is already discussed in a separate thread;\n\n0002_toaster_interface_v8.patch - TOAST API interface and SQL syntax\nallowing creation of a custom Toaster (CREATE TOASTER ...)\nand setting a Toaster on a table column (CREATE TABLE t (data bytea STORAGE\nEXTERNAL TOASTER bytea_toaster);)\n\n0003_toaster_default_v7.patch - Default TOAST implemented via the TOAST API;\n\n0004_toaster_snapshot_v7.patch - refactoring of Default TOAST and support\nfor versioned Toast rows;\n\n0005_bytea_appendable_toaster_v7.patch - contrib module\nbytea_appendable_toaster - a special Toaster for the bytea datatype with\na customized append operation;\n\n0006_toasterapi_docs_v3.patch - documentation package for Pluggable TOAST;\n\n0007_fix_alignment_of_custom_toast_pointers_v3.patch - fixes the custom toast\npointer's\nalignment required by the bytea toaster, by Nikita Glukhov;\n\n0008_fix_toast_tuple_externalize_v3.patch - fixes the toast_tuple_externalize\nfunction\nnot to call toast if the old data is the same as the new one.\n\nAn example of using the TOAST API:\nCREATE EXTENSION bytea_toaster;\nCREATE TABLE test_bytea_append (id int, a\nbytea STORAGE EXTERNAL);\nALTER TABLE test_bytea_append ALTER a SET TOASTER bytea_toaster;\nINSERT INTO test_bytea_append SELECT i, repeat('a', 10000)::bytea FROM\ngenerate_series(1, 10) i;\nUPDATE test_bytea_append SET a = a || repeat('b', 3000)::bytea;\n\nThis patch set opens the following issues:\n1) With TOAST independent of the AM that uses it, it makes sense to move\ncompression from the AM into the Toaster and make Compression one of the Toaster's\noptions.\nActually, Toasters allow using any compression method independently of the AM;\n2) Should the default Toaster be implemented without using Indexes (currently in\ndevelopment)?\n3) It allows different, SQL-accessed large objects of almost infinite size IN\nDATABASE, unlike the current large_object functionality, and does not limit\ntheir quantity;\n4) Several already developed Toasters show impressive results for the\ndatatypes they were designed 
for.\n\nWe're awaiting feedback.\n\nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nOn Mon, Jul 11, 2022 at 3:03 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> Hi!\n> We have branch with incremental commits worm where patches were generated\n> with format-patch -\n> https://github.com/postgrespro/postgres/tree/toasterapi_clean\n> I'll clean up commits from garbage files asap, sorry, haven't noticed them\n> while moving changes.\n>\n> Best regards,\n> Nikita Malakhov\n>\n> On Fri, Jul 1, 2022 at 3:27 PM Matthias van de Meent <\n> boekewurm+postgres@gmail.com> wrote:\n>\n>> On Thu, 30 Jun 2022 at 22:26, Nikita Malakhov <hukutoc@gmail.com> wrote:\n>> >\n>> > Hi hackers!\n>> > Here is the patch set rebased onto current master (15 rel beta 2 with\n>> commit from 29.06).\n>>\n>> Thanks!\n>>\n>> > Just to remind:\n>> > With this patch set you will be able to develop and plug in your own\n>> TOAST mechanics for table columns. Knowing internals and/or workflow and\n>> workload\n>> > of data being TOASTed makes Custom Toasters much more efficient in\n>> performance and storage.\n>>\n>> The new toast API doesn't seem to be very well documented, nor are the\n>> new features. Could you include a README or extend the comments on how\n>> this is expected to work, and/or how you expect people to use (the\n>> result of) `get_vtable`?\n>>\n>> > Patch set consists of 9 incremental patches:\n>> > [...]\n>> > 0002_toaster_interface_v7.patch - TOAST API interface and SQL syntax\n>> allowing creation of custom Toaster (CREATE TOASTER ...)\n>> > and setting Toaster to a table column (CREATE TABLE t (data bytea\n>> STORAGE EXTERNAL TOASTER bytea_toaster);)\n>>\n>> This patch 0002 seems to include changes to log files (!) that don't\n>> exist in current HEAD, but at the same time are not created by patch\n>> 0001. 
Could you please check and sanitize your patches to ensure that\n>> the changes are actually accurate?\n>>\n>> Like Robert Haas mentioned earlier[0], please create a branch in a git\n>> repository that has a commit containing the changes for each patch,\n>> and then use git format-patch to generate a single patchset, one that\n>> shares a single version number. Keeping track of what patches are\n>> needed to test this CF entry is already quite difficult due to the\n>> amount of patches and their packaging (I'm having troubles managing\n>> these seperate .patch.gz), and the different version tags definitely\n>> don't help in finding the correct set of patches to apply once\n>> downloaded.\n>>\n>> Kind regards,\n>>\n>> Matthias van de Meent\n>>\n>> [0]\n>> https://www.postgresql.org/message-id/CA%2BTgmoZBgNipyKuQAJzNw2w7C9z%2B2SMC0SAHqCnc_dG1nSLNcw%40mail.gmail.com\n>>\n>\n>\n>", "msg_date": "Wed, 13 Jul 2022 22:45:40 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers!\n\nWe really need your feedback on the last patchset update!\n\nOn a previous question about TOAST API overhead - please check script in\nattach, we tested INSERT, UPDATE and SELECT\noperations, and ran it on vanilla master and on patched master (vanilla\nwith untouched TOAST implementation and patched\nwith default TOAST implemented via TOAST API, in this patch set - with\npatches up to 0005_bytea_appendable_toaster installed).\nSome of the test scripts will be included in the patch set later, as an\nadditional patch.\n\nCurrently I'm working on an update to the default Toaster (some internal\noptimizations, not affecting functionality)\nand readme files explaining Pluggable TOAST.\n\nAn example of using custom Toaster:\n\nCustom Toaster extension definition (developer):\nCREATE FUNCTION custom_toaster_handler(internal)\nRETURNS toaster_handler\nAS 'MODULE_PATHNAME'\nLANGUAGE C;\n\nCREATE TOASTER 
custom_toaster HANDLER custom_toaster_handler;\n\nUser's POV:\nCREATE EXTENSION custom_toaster;\nselect * from pg_toaster;\n oid | tsrname | tsrhandler\n-------+----------------+-------------------------\n 9864 | deftoaster | default_toaster_handler\n 32772 | custom_toaster | custom_toaster_handler\n\n\nCREATE TABLE tst1 (\n c1 text STORAGE plain,\n c2 text STORAGE external TOASTER custom_toaster,\n id int4\n);\nALTER TABLE tst1 ALTER COLUMN c1 SET TOASTER custom_toaster;\n=# \\d+ tst1\n Column | Type | Collation | Nullable | Default | Storage | Toaster\n |...\n--------+---------+-----------+----------+---------+----------+----------------+...\n c1 | text | | | | plain | deftoaster\n |...\n c2 | text | | | | external |\ncustom_toaster |...\n id | integer | | | | plain |\n |...\nAccess method: heap\n\nThanks!\n\nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nOn Wed, Jul 13, 2022 at 10:45 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> Hi hackers!\n> According to previous requests the patch branch was cleaned up from\n> garbage, logs, etc. All conflicts' resolutions were merged\n> into patch commits where they appear, branch was rebased to present one\n> commit for one patch. The branch was actualized,\n> and a fresh patch set was generated.\n> https://github.com/postgrespro/postgres/tree/toasterapi_clean\n>\n> What we propose in short:\n> We suggest a way to make TOAST pluggable as Storage (in a way like\n> Pluggable Access Methods) - detached TOAST\n> mechanics from Heap AM, and made it an independent pluggable and\n> extensible part with our freshly developed TOAST API.\n> With this patch set you will be able to develop and plug in your own TOAST mechanics\n> for table columns. 
Knowing internals\n> and/or workflow and workload of data being TOASTed makes Custom Toasters much\n> more efficient in performance and storage.\n> We keep backwards compatibility and default TOAST mechanics works as it\n> worked previously, working silently with any\n> Toastable datatype\n> (and TOASTed values and tables from previous versions, no changes in this)\n> and set as default Toaster is not stated otherwise,\n> but through our TOAST API.\n>\n> We've already presented out work at HighLoad, PgCon and PgConf\n> conferences, you can find materials here\n> http://www.sai.msu.su/~megera/postgres/talks/\n> Testing scripts used in talks are a bit scarce and have a lot of\n> manual handling, so it is another bit of work to bunch them into\n> patch set, please be patient, I'll try to make it ASAP.\n>\n> We have ready to plug in extension Toasters\n> - bytea appendable toaster for bytea datatype (impressive speedup with\n> bytea append operation) is included in this patch set;\n> - JSONB toaster for JSONB (very cool performance improvements when\n> dealing with TOASTed JSONB) will be provided later;\n> - Prototype Toasters (in development) for PostGIS (much faster then\n> default with geometric data), large binary objects\n> (like pg_largeobject, but much, much larger, and without existing large\n> object limitations), and currently we're checking default\n> Toaster implementation without using Indexes (direct access by TIDs, up\n> to 3 times faster than default on smaller values,\n> less storage due to absence of index tree).\n>\n> Patch set consists of 8 incremental patches:\n> 0001_create_table_storage_v5.patch - SQL syntax fix for CREATE TABLE\n> clause, processing SET STORAGE... 
correctly;\n> This patch is already discussed in a separate thread;\n>\n> 0002_toaster_interface_v8.patch - TOAST API interface and SQL syntax\n> allowing creation of custom Toaster (CREATE TOASTER ...)\n> and setting Toaster to a table column (CREATE TABLE t (data bytea STORAGE\n> EXTERNAL TOASTER bytea_toaster);)\n>\n> 0003_toaster_default_v7.patch - Default TOAST implemented via TOAST API;\n>\n> 0004_toaster_snapshot_v7.patch - refactoring of Default TOAST and support\n> for versioned Toast rows;\n>\n> 0005_bytea_appendable_toaster_v7.patch - contrib module\n> bytea_appendable_toaster - special Toaster for bytea datatype with\n> customized append operation;\n>\n> 0006_toasterapi_docs_v3.patch - documentation package for Pluggable TOAST;\n>\n> 0007_fix_alignment_of_custom_toast_pointers_v3.patch - fixes custom toast\n> pointer's\n> alignment required by bytea toaster by Nikita Glukhov;\n>\n> 0008_fix_toast_tuple_externalize_v3.patch - fixes toast_tuple_externalize\n> function\n> not to call toast if old data is the same as the new one.\n>\n> An example of using the TOAST API:\n> CREATE EXTENSION bytea_toaster;\n> CREATE TABLE test_bytea_append (id int, a\n> bytea STORAGE EXTERNAL);\n> ALTER TABLE test_bytea_append ALTER a SET TOASTER bytea_toaster;\n> INSERT INTO test_bytea_append SELECT i, repeat('a', 10000)::bytea FROM\n> generate_series(1, 10) i;\n> UPDATE test_bytea_append SET a = a || repeat('b', 3000)::bytea;\n>\n> This patch set opens the following issues:\n> 1) With TOAST independent of the AM it is used by, it makes sense to move\n> compression from AM into Toaster and make Compression one of Toaster's\n> options.\n> Actually, Toasters allow using any compression method independently of\n> AM;\n> 2) Implement default Toaster without using Indexes (currently in\n> development)?\n> 3) Allows different, SQL-accessed large objects of almost infinite size IN\n> DATABASE, unlike the current large_object functionality, and does not limit\n> their quantity;\n> 4) Several
already developed Toasters show impressive results for\n> datatypes they were designed for.\n>\n> We're awaiting feedback.\n>\n> Regards,\n> Nikita Malakhov\n> Postgres Professional\n> https://postgrespro.ru/\n>\n> On Mon, Jul 11, 2022 at 3:03 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n>\n>> Hi!\n>> We have branch with incremental commits from where patches were generated\n>> with format-patch -\n>> https://github.com/postgrespro/postgres/tree/toasterapi_clean\n>> I'll clean up commits from garbage files asap, sorry, haven't noticed\n>> them while moving changes.\n>>\n>> Best regards,\n>> Nikita Malakhov\n>>\n>> On Fri, Jul 1, 2022 at 3:27 PM Matthias van de Meent <\n>> boekewurm+postgres@gmail.com> wrote:\n>>\n>>> On Thu, 30 Jun 2022 at 22:26, Nikita Malakhov <hukutoc@gmail.com> wrote:\n>>> >\n>>> > Hi hackers!\n>>> > Here is the patch set rebased onto current master (15 rel beta 2 with\n>>> commit from 29.06).\n>>>\n>>> Thanks!\n>>>\n>>> > Just to remind:\n>>> > With this patch set you will be able to develop and plug in your own\n>>> TOAST mechanics for table columns. Knowing internals and/or workflow and\n>>> workload\n>>> > of data being TOASTed makes Custom Toasters much more efficient in\n>>> performance and storage.\n>>>\n>>> The new toast API doesn't seem to be very well documented, nor are the\n>>> new features. Could you include a README or extend the comments on how\n>>> this is expected to work, and/or how you expect people to use (the\n>>> result of) `get_vtable`?\n>>>\n>>> > Patch set consists of 9 incremental patches:\n>>> > [...]\n>>> > 0002_toaster_interface_v7.patch - TOAST API interface and SQL syntax\n>>> allowing creation of custom Toaster (CREATE TOASTER ...)\n>>> > and setting Toaster to a table column (CREATE TABLE t (data bytea\n>>> STORAGE EXTERNAL TOASTER bytea_toaster);)\n>>>\n>>> This patch 0002 seems to include changes to log files (!) that don't\n>>> exist in current HEAD, but at the same time are not created by patch\n>>> 0001. 
Could you please check and sanitize your patches to ensure that\n>>> the changes are actually accurate?\n>>>\n>>> Like Robert Haas mentioned earlier[0], please create a branch in a git\n>>> repository that has a commit containing the changes for each patch,\n>>> and then use git format-patch to generate a single patchset, one that\n>>> shares a single version number. Keeping track of what patches are\n>>> needed to test this CF entry is already quite difficult due to the\n>>> amount of patches and their packaging (I'm having trouble managing\n>>> these separate .patch.gz), and the different version tags definitely\n>>> don't help in finding the correct set of patches to apply once\n>>> downloaded.\n>>>\n>>> Kind regards,\n>>>\n>>> Matthias van de Meent\n>>>\n>>> [0]\n>>> https://www.postgresql.org/message-id/CA%2BTgmoZBgNipyKuQAJzNw2w7C9z%2B2SMC0SAHqCnc_dG1nSLNcw%40mail.gmail.com\n>>>\n>>\n>>\n>>\n>\n>", "msg_date": "Wed, 20 Jul 2022 12:15:41 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers!\n\nI've reworked the patch set according to recommendations of Aleksander\nAlekseev, Robert Haas\nand Matthias van de Meent, and decided, as was recommended earlier,\nto include only the most\nimportant part in the first set. Also, I've added a large README on\nPluggable TOAST to sources,\nI'll be grateful for feedback on this README file. 
Also, some minor fixes\nwere included in patches.\n\nNow patch set consists of 3 incremental patches:\n0001_create_table_storage_v6.patch - This patch adds an important part of the SQL\nsyntax fix for\nCREATE TABLE clause, which is mandatory for all Pluggable TOAST\nfunctionality - processing of\nSET STORAGE option correctly.\nThis patch is presented by Teodor Sigaev and is already discussed and\nmarked as ready for commit\nin a separate thread;\n\n0002_toaster_interface_v9.patch - The patch introduces TOAST API interface\nand SQL syntax\nallowing creation of custom Toaster (CREATE TOASTER ...) and assigning\nToaster to a table column\nCREATE TABLE t (data bytea STORAGE EXTERNAL TOASTER bytea_toaster);\nDefault TOAST functionality is left intact, for the sake of not-so-big\none-time changes, and nothing\nchanges from user's perspective, but here the user can already develop and plug\nin custom Toasters;\n\n0003_toaster_default_v8.patch - Introducing Default TOAST implemented via\nTOAST API, and\na large README on Pluggable TOAST functionality for developers, put into\n/src/backend/access/toast/README.toastapi\n\nWith respect to Aleksander Alekseev\n>>There is no need for contrib/dummy_toaster\n>>similarly as there is no contrib/ for a dummy TableAM. The provided\n>>example doesn't do much anyway since all the heavy lifting should be\n>>done in the callbacks implementation.\nwe decided to leave the dummy Toaster as it is, because its purpose is not\nto show heavy lifting\ndone by internal Toaster implementation, but to demonstrate how Toaster\nextension is defined\nand plugged in.\n\nThank you for your attention and feedback!\n\n\nOn Fri, Jul 1, 2022 at 11:10 AM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi Nikita,\n>\n> > Here is the patch set rebased onto current master (15 rel beta 2 with\n> commit from 29.06).\n>\n> Thanks for the rebased patchset.\n>\n> This is a huge effort though. 
I suggest splitting it into several CF\n> entries as we previously did with other patches (64-bit XIDs to name\n> one, which BTW is arguably much simpler, but still we had to split\n> it). This will simplify the review, limit the scope of the discussion\n> and simplify passing the CI. Cfbot is already not happy with the\n> patchset.\n>\n> 0001 - is already in a separate thread [1], that's good. I suggest\n> marking it in the patch description for clarity.\n> 0002, 0003 - I suggest focusing on these two in this thread and keep\n> the rest of the changes for later discussion. Please submit 0004,\n> 0005... next time, when we finish with 0001-0003.\n>\n> The order of proposed changes IMO is wrong.\n>\n> 0002 should refactor the default TOASTer in a manner similar to a\n> pluggable one. Nothing should change from the user's perspective. If\n> you do benchmarks, I suggest not to reference the previous talks. I\n> familiarized myself with all the related talks linked before (took me\n> some time...) and found them useless for the discussion since they\n> don't provide exact steps to reproduce. Please provide exact scripts\n> and benchmarks results for 0002 in this thread.\n>\n> 0003 should add an interface that allows replacing the default TOASTer\n> with an alternative one. There is no need for contrib/dummy_toaster\n> similarly as there is no contrib/ for a dummy TableAM. The provided\n> example doesn't do much anyway since all the heavy lifting should be\n> done in the callbacks implementation. 
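[Editor's illustration] To make the quoted point concrete — that the interface itself is thin and the heavy lifting lives in the callbacks, so the default toaster can simply be one registered implementation — here is a hedged standalone sketch of per-column dispatch through a callback table. All names, OIDs, and signatures are illustrative, not the patch's actual API:

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned int SketchOid;

typedef struct SketchRoutine
{
	int			(*toast) (int rawsize);	/* returns "stored" size; the real API differs */
} SketchRoutine;

static int default_toast(int rawsize) { return rawsize / 2; }	/* pretend compression */
static int custom_toast(int rawsize)  { return rawsize / 4; }

/* OIDs borrowed from the pg_toaster example output earlier in the thread. */
static const struct { SketchOid oid; SketchRoutine rt; } registry[] = {
	{9864, {default_toast}},	/* deftoaster, registered like any other */
	{32772, {custom_toast}},	/* custom_toaster */
};

static const SketchRoutine *
lookup_toaster(SketchOid oid)
{
	for (size_t i = 0; i < sizeof(registry) / sizeof(registry[0]); i++)
		if (registry[i].oid == oid)
			return &registry[i].rt;
	return NULL;
}

/*
 * What a column-level call site reduces to: look up by the column's
 * toaster OID, then let the callback do the heavy lifting.
 */
int
toast_column_value(SketchOid toaster_oid, int rawsize)
{
	const SketchRoutine *rt = lookup_toaster(toaster_oid);

	assert(rt != NULL);
	return rt->toast(rawsize);
}
```

With this shape there is indeed nothing for a dummy extension to demonstrate beyond registration itself.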
For test purposes please use\n> src/test/regress/.\n>\n> [1]:\n> https://www.postgresql.org/message-id/de83407a-ae3d-a8e1-a788-920eb334f25b%40sigaev.ru\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/
", "msg_date": "Fri, 22 Jul 2022 14:05:30 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi Nikita,\n\n> I've reworked the patch set according to recommendations of Aleksander Alekseev, Robert Haas\n> and Matthias van de Meent, and decided, as it was recommended earlier, include only the most\n> important part in the first set. Also, I've added a large README on Pluggable TOAST to sources,\n> I'll be grateful for feedback on this README file. Also, some minor fixes were included in patches.\n\nMany thanks for accounting for the previous feedback. I will take a\nlook somewhere early next week.\n\n> 0001 [...] This patch is presented by Teodor Sigaev and is already discussed and marked as ready for commit\nin a separate thread;\n\nFYI it was merged, see 784cedda [1].\n\nAlso, you forgot the attachments :)\n\n[1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=784cedda\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 22 Jul 2022 14:54:22 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "On Wed, 20 Jul 2022 at 11:16, Nikita Malakhov <hukutoc@gmail.com> wrote:\n>\n> Hi hackers!\n\nHi,\n\nPlease don't top-post here. See\nhttps://wiki.postgresql.org/wiki/Mailing_Lists#Email_etiquette_mechanics.\n\n> We really need your feedback on the last patchset update!\n\nThis is feedback on the latest version that was shared on the mailing\nlist here [0]. 
Your mail from today didn't seem to have an attachment,\nand I haven't checked the git repository for changes.\n\n0001: Create Table Storage:\nLGTM\n\n---\n\n0002: Toaster API interface\n\n> tablecmds.c\n\n> SetIndexStorageProperties(Relation rel, Relation attrelation,\n> AttrNumber attnum,\n> bool setstorage, char newstorage,\n> bool setcompression, char newcompression,\n> + bool settoaster, Oid toasterOid,\n> LOCKMODE lockmode)\n\nIndexes cannot index toasted values, so why would the toaster oid be\ninteresting for index storage properties?\n\n> static List *MergeAttributes(List *schema, List *supers, char relpersistence,\n> - bool is_partition, List **supconstr);\n> + bool is_partition, List **supconstr,\n> + Oid accessMethodId);\n\n> toasterapi.h:\n\n> +SearchTsrCache(Oid toasterOid)\n> ...\n> + for_each_from(lc, ToasterCache, 0)\n> + {\n> + entry = (ToasterCacheEntry*)lfirst(lc);\n> +\n> + if (entry->toasterOid == toasterOid)\n> + {\n> + /* remove entry from list, it will be added in a head of list below */\n> + foreach_delete_current(ToasterCache, lc);\n> + goto out;\n> + }\n> + }\n\nMoving toasters seems quite expensive when compared to just index\nchecks. When you have many toasters, but only a few hot ones, this\ncurrently will move the \"cold\" toasters around a lot. Why not use a\nstack instead (or alternatively, a 'zipper' (or similar) data\nstructure), where the hottest toaster is on top, so that we avoid\nlarger memmove calls?\n\n> postgres.h\n\n> +/* varatt_custom uses 16bit aligment */\nTo the best of my knowledge varlena-pointers are unaligned; and this\nis affirmed by the comment immediately under varattrib_1b_e. 
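[Editor's illustration] The memcpy-into-an-aligned-local pattern referenced here can be shown in isolation. The struct below is a stand-in for illustration, not the real varatt_external; the point is only that multi-byte fields of an unaligned on-disk pointer must be copied out before use rather than dereferenced in place:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative stand-in for an external TOAST pointer's fixed fields. */
typedef struct sketch_external
{
	int32_t		rawsize;
	int32_t		extsize;
} sketch_external;

static int32_t
read_rawsize(const unsigned char *unaligned_ptr)
{
	sketch_external local;

	/* Safe at any byte offset; *(int32_t *) unaligned_ptr would not be. */
	memcpy(&local, unaligned_ptr, sizeof(local));
	return local.rawsize;
}

int32_t
demo_unaligned_read(void)
{
	unsigned char buf[1 + sizeof(sketch_external)];
	sketch_external src = {12345, 678};

	memcpy(buf + 1, &src, sizeof(src));	/* the +1 forces misalignment */
	return read_rawsize(buf + 1);
}
```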
Assuming\nalignment to 16 bits should/would be incorrect in some of the cases.\nThis is handled for normal varatt_external values by memcpy-ing the\nvalue into local (aligned) fields before using them, but that doesn't\nseem to be the case for varatt_custom?\n\n---\n\n0003: (re-implement default toaster using toaster interface)\n\nI see significant changes to the dummy toaster (created in 0002),\ncould those be moved to patch 0002 in the next iteration?\n\ndetoast.c and detoast.h are effectively empty after this patch (only\nimports and commented-out code remain), please fully remove them\ninstead - that saves on patch diff size.\n\nWith the new API, I'm getting confused about the location of the\nvarious toast_* functions. They are spread around in various files\nthat have no clear distinction on why it is (still) located there:\nsome functions are moved to access/toast/*, while others are moved\naround in catalog/toasting.c, access/common/toast_internals.c and\naccess/table/toast_helper.c.\n\n> detoast.c / tableam.h\nAccording to a quick search, all core usage of\ntable_relation_fetch_toast_slice is removed. Shouldn't we remove that\ntableAM API (+ heapam implementation) instead of updating and\nmaintaining it? Same question for table_relation_toast_am - I'm not\nsure that it remains the correct way of dealing with toast.\n\n> toast_helper.c\n\ntoast_delete_external_datum:\nPlease clean up code that was commented out from the patches, it\ndetracts from the readability of a patch.\n\n> toast_internals.c\n\nThis seems like a bit of a mess. Can't we split this up into a heaptoast\n(or whatever we're going to call the default toaster) and actual toast\ninternals?\n\n> generic_toaster.c\n\nCould you align name styles in this new file? 
It has both camelCase\nand snake_case for function names.\n\n> toasting.c\n\nI'm not entirely sure that we should retain catalog/toasting.c if we\nare going to depend on the custom toaster API. Shouldn't the creation\nof toast tables be delegated to the toaster?\n\n> + * toast_get_valid_index\n> + *\n> + * Get OID of valid index associated to given toast relation. A toast\n> + * relation can have only one valid index at the same time.\n\nAlthough this is code being moved around, the comment is demonstrably\nfalse: A cancelled REINDEX CONCURRENTLY with a subsequent REINDEX can\nleave a toast relation with 2 valid indexes.\n\n---\n\n0004: refactoring and optimization of default toaster\n0005: bytea appendable toaster\n\nI didn't review these yet.\n\n---\n\n0006: docs\n\nSeems like a good start, but I'm not sure that we need the struct\ndefinition in the docs. I think the BRIN extensibility docs [1] are a\nbetter example of what I think the documentation for this API should\nlook like.\n\n---\n\n0007: fix alignment of custom toast pointers\nThis is not a valid fix for the alignment requirement for custom toast\npointers. 
You now leave one byte empty if you are not already aligned,\nwhich for on-disk toast pointers means that we're dealing with a\n4-byte aligned value, which is not the case because this is a\n2-byte-aligned value.\n\nRegardless, this should be merged into 0002, not remain a seperate patch.\n\n---\n\n0008: fix tuple externalization\nShould be merged into the relevant patch as well, not as a separate patch.\n\n---\n\n> Currently I'm working on an update to the default Toaster (some internal optimizations, not affecting functionality)\n> and readme files explaining Pluggable TOAST.\n\nThat would be greatly appreciated - 0006 does not cover why we need\nvtable, nor how it's expected to be used in type-aware code.\n\n\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://www.postgresql.org/message-id/CAN-LCVNkU%2Bkdieu4i_BDnLgGszNY1RCnL6Dsrdz44fY7FOG3vg%40mail.gmail.com\n[1] https://www.postgresql.org/docs/15/brin-extensibility.html\n\n\n", "msg_date": "Fri, 22 Jul 2022 15:16:56 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers!\n\nMatthias, thank you very much for your feedback!\nSorry, I forgot to attach files.\nAttaching here, but they are for the commit tagged \"15beta2\", I am\ncurrently\nrebasing this branch onto the actual master and will provide rebased\nversion,\nwith some corrections according to your feedback, in a day or two.\n\n>Indexes cannot index toasted values, so why would the toaster oid be\n>interesting for index storage properties?\n\nHere Teodor might correct me. Toast tables are indexed, and Heap TOAST\nmechanics accesses Toasted tuples by index, isn't it the case?\n\n>Moving toasters seems quite expensive when compared to just index\n>checks. When you have many toasters, but only a few hot ones, this\n>currently will move the \"cold\" toasters around a lot. 
Why not use a\n>stack instead (or alternatively, a 'zipper' (or similar) data\n>structure), where the hottest toaster is on top, so that we avoid\n>larger memmove calls?\n\nThis is a reasonable remark, we'll consider it for the next iteration. Our\nreason\nis that we think there won't be a lot of custom Toasters, most likely less\nthan\na dozen, for the most complex/heavy datatypes so we haven't considered\nthese expenses.\n\n>To the best of my knowledge varlena-pointers are unaligned; and this\n>is affirmed by the comment immediately under varattrib_1b_e. Assuming\n>alignment to 16 bits should/would be incorrect in some of the cases.\n>This is handled for normal varatt_external values by memcpy-ing the\n>value into local (aligned) fields before using them, but that doesn't\n>seem to be the case for varatt_custom?\n\nAlignment correction seemed reasonable for us because structures are\nanyway aligned in memory, so when we use 1 and 2-byte fields along\nwith 4-byte it may create a lot of padding. Am I wrong? Also, correct\nalignment somewhat simplifies access to structure fields.\n\n>0003: (re-implement default toaster using toaster interface)\n>I see significant changes to the dummy toaster (created in 0002),\n>could those be moved to patch 0002 in the next iteration?\nWill do.\n\n>detoast.c and detoast.h are effectively empty after this patch (only\n>imports and commented-out code remain), please fully remove them\n>instead - that saves on patch diff size.\nWill do.\n\nAbout the location of toast_ functions: these functions are part of Heap\nTOAST mechanics, and they were scattered among other Heap internals\nsources. I've tried to gather them and put them in a more logical order, but\nthis work is not fully finished yet and will take some time. 
Working on it.\n\nI'll check if table_relation_fetch_toast_slice could be removed, thanks for\nthe remark.\n\ntoast_helper - done, will be provided in rebased version.\n\ntoast_internals - this one is an internal part of TOAST implemented in\nHeap AM, but I'll try to straighten it out as much as I can.\n\nnaming conventions in some sources - done, will be provided in rebased\npatch set.\n\n>Shouldn't the creation of toast tables be delegated to the toaster?\n\nYes, you're right, and actually, it is. I'll check that and correct it in the\nrebased\nversion.\n\n>Although this is code being moved around, the comment is demonstrably\n>false: A cancelled REINDEX CONCURRENTLY with a subsequent REINDEX can\n>leave a toast relation with 2 valid indexes.\n\nThis code is quite old, we've not changed it but thanks for the remark,\nI'll check it more carefully.\n\nSmall fixes are already merged into larger patches in attached files. Also,\nI appreciate your feedback on documentation - if you have an\nopportunity\nplease check the README provided in 0003. I've taken your comments on\ndocumentation\ninto account and will include corrections according to them into the rebased\npatch.\n\nAs Aleksander recommended, I've shortened the patch set and left only the\nmost\nimportant part - API and re-implemented default Toast. All bells and\nwhistles are not\nof so much importance and can be sent later after the API itself is\nstraightened\nout and committed.\n\nThank you very much!\n\nOn Fri, Jul 22, 2022 at 4:17 PM Matthias van de Meent <\nboekewurm+postgres@gmail.com> wrote:\n\n> On Wed, 20 Jul 2022 at 11:16, Nikita Malakhov <hukutoc@gmail.com> wrote:\n> >\n> > Hi hackers!\n>\n> Hi,\n>\n> Please don't top-post here. See\n> https://wiki.postgresql.org/wiki/Mailing_Lists#Email_etiquette_mechanics.\n>\n> > We really need your feedback on the last patchset update!\n>\n> This is feedback on the latest version that was shared on the mailing\n> list here [0]. 
Your mail from today didn't seem to have an attachment,\n> and I haven't checked the git repository for changes.\n>\n> 0001: Create Table Storage:\n> LGTM\n>\n> ---\n>\n> 0002: Toaster API interface\n>\n> > tablecmds.c\n>\n> > SetIndexStorageProperties(Relation rel, Relation attrelation,\n> > AttrNumber attnum,\n> > bool setstorage, char newstorage,\n> > bool setcompression, char newcompression,\n> > + bool settoaster, Oid toasterOid,\n> > LOCKMODE lockmode)\n>\n> Indexes cannot index toasted values, so why would the toaster oid be\n> interesting for index storage properties?\n>\n> > static List *MergeAttributes(List *schema, List *supers, char\n> relpersistence,\n> > - bool is_partition, List **supconstr);\n> > + bool is_partition, List **supconstr,\n> > + Oid accessMethodId);\n>\n> > toasterapi.h:\n>\n> > +SearchTsrCache(Oid toasterOid)\n> > ...\n> > + for_each_from(lc, ToasterCache, 0)\n> > + {\n> > + entry = (ToasterCacheEntry*)lfirst(lc);\n> > +\n> > + if (entry->toasterOid == toasterOid)\n> > + {\n> > + /* remove entry from list, it will be added in a head of\n> list below */\n> > + foreach_delete_current(ToasterCache, lc);\n> > + goto out;\n> > + }\n> > + }\n>\n> Moving toasters seems quite expensive when compared to just index\n> checks. When you have many toasters, but only a few hot ones, this\n> currently will move the \"cold\" toasters around a lot. Why not use a\n> stack instead (or alternatively, a 'zipper' (or similar) data\n> structure), where the hottest toaster is on top, so that we avoid\n> larger memmove calls?\n>\n> > postgres.h\n>\n> > +/* varatt_custom uses 16bit aligment */\n> To the best of my knowledge varlena-pointers are unaligned; and this\n> is affirmed by the comment immediately under varattrib_1b_e. 
Assuming\n> alignment to 16 bits should/would be incorrect in some of the cases.\n> This is handled for normal varatt_external values by memcpy-ing the\n> value into local (aligned) fields before using them, but that doesn't\n> seem to be the case for varatt_custom?\n>\n> ---\n>\n> 0003: (re-implement default toaster using toaster interface)\n>\n> I see significant changes to the dummy toaster (created in 0002),\n> could those be moved to patch 0002 in the next iteration?\n>\n> detoast.c and detoast.h are effectively empty after this patch (only\n> imports and commented-out code remain), please fully remove them\n> instead - that saves on patch diff size.\n>\n> With the new API, I'm getting confused about the location of the\n> various toast_* functions. They are spread around in various files\n> that have no clear distinction on why it is (still) located there:\n> some functions are moved to access/toast/*, while others are moved\n> around in catalog/toasting.c, access/common/toast_internals.c and\n> access/table/toast_helper.c.\n>\n> > detoast.c / tableam.h\n> According to a quick search, all core usage of\n> table_relation_fetch_toast_slice is removed. Shouldn't we remove that\n> tableAM API (+ heapam implementation) instead of updating and\n> maintaining it? Same question for table_relation_toast_am - I'm not\n> sure that it remains the correct way of dealing with toast.\n>\n> > toast_helper.c\n>\n> toast_delete_external_datum:\n> Please clean up code that was commented out from the patches, it\n> detracts from the readability of a patch.\n>\n> > toast_internals.c\n>\n> This seems like a bit of a mess, considering the lack of\n> Can't we split this up into a heaptoast (or whatever we're going to\n> call the default toaster) and actual toast internals? It seems to me\n> that\n>\n> > generic_toaster.c\n>\n> Could you align name styles in this new file? 
It has both camelCase\n> and snake_case for function names.\n>\n> > toasting.c\n>\n> I'm not entirely sure that we should retain catalog/toasting.c if we\n> are going to depend on the custom toaster API. Shouldn't the creation\n> of toast tables be delegated to the toaster?\n>\n> > + * toast_get_valid_index\n> > + *\n> > + * Get OID of valid index associated to given toast relation. A toast\n> > + * relation can have only one valid index at the same time.\n>\n> Although this is code being moved around, the comment is demonstrably\n> false: A cancelled REINDEX CONCURRENTLY with a subsequent REINDEX can\n> leave a toast relation with 2 valid indexes.\n>\n> ---\n>\n> 0004: refactoring and optimization of default toaster\n> 0005: bytea appendable toaster\n>\n> I dind't review these yet.\n>\n> ---\n>\n> 0006: docs\n>\n> Seems like a good start, but I'm not sure that we need the struct\n> definition in the docs. I think the BRIN extensibility docs [1] are a\n> better example on what I think the documentation for this API should\n> look like.\n>\n> ---\n>\n> 0007: fix alignment of custom toast pointers\n> This is not a valid fix for the alignment requirement for custom toast\n> pointers. 
You now leave one byte empty if you are not already aligned,\n> which for on-disk toast pointers means that we're dealing with a\n> 4-byte aligned value, which is not the case because this is a\n> 2-byte-aligned value.\n>\n> Regardless, this should be merged into 0002, not remain a seperate patch.\n>\n> ---\n>\n> 0008: fix tuple externalization\n> Should be merged into the relevant patch as well, not as a separate patch.\n>\n> ---\n>\n> > Currently I'm working on an update to the default Toaster (some internal\n> optimizations, not affecting functionality)\n> > and readme files explaining Pluggable TOAST.\n>\n> That would be greatly appreciated - 0006 does not cover why we need\n> vtable, nor how it's expected to be used in type-aware code.\n>\n>\n>\n> Kind regards,\n>\n> Matthias van de Meent\n>\n> [0]\n> https://www.postgresql.org/message-id/CAN-LCVNkU%2Bkdieu4i_BDnLgGszNY1RCnL6Dsrdz44fY7FOG3vg%40mail.gmail.com\n> [1] https://www.postgresql.org/docs/15/brin-extensibility.html\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Sat, 23 Jul 2022 10:15:05 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers!\n\nHere is the patch set rebased to master from 22.07. I've got some trouble\nrebasing it due to\nconflicts, so it took some time.\nI've made some corrections according to Matthias and Aleksander comments,\nthough not all\nof them, because some require refactoring of very old code and would have\nto take much more\neffort. 
I keep these recommended corrections in mind and am already working on\nthem but it will\nrequire extensive testing and some more work, so they will be presented\nlater, in the next iteration\nor next patch - these are the optimization of heap AM regarding\ntable_relation_fetch_toast_slice,\nthe double-index problem, and continuing to straighten out code related to\nTOAST functionality.\nIt's quite a task because as I mentioned before, this code was scattered\nover Heap AM and\nreference implementation of TOAST is very tightly intertwined with Heap AM\nitself. Default\ntoaster uses Heap AM storage so it is unlikely that it will be possible to\nfully detach it from\nHeap.\n\nHowever, I've made some more refactoring, removed empty sources, corrected\ncode according\nto naming conventions, and extended README.toastapi document.\n\n0002_toaster_interface_v10 contains TOAST API with Dummy toaster as an\nexample (but I would,\nas recommended, remove Dummy toaster and provide it as an extension), and\ndefault Toaster\nwas left as-is (reference implementation).\n\n0003_toaster_default_v9 implements reference TOAST as Default Toaster via\nTOAST API,\nso Heap AM calls Toast only via API, and does not have direct calls to\nToast functionality.\n\n0004_toaster_snapshot_v8 continues refactoring and has some important\nchanges (added\ninto README.toastapi a new part related to TOAST API extensibility - the virtual\nfunctions table).\n\nAlso, I'll provide documentation package corrected according to Matthias'\nremarks later,\nin the next patch set.\n\nPlease check attached patch set.\nAlso, GIT branch with this patch resides here:\nhttps://github.com/postgrespro/postgres/tree/toasterapi_clean\n\nThanks to all reviewers for the feedback!\n\nOn Sat, Jul 23, 2022 at 10:15 AM Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> Hi hackers!\n>\n> Matthias, thank you very much for your feedback!\n> Sorry, I forgot to attach files.\n> Attaching here, but they are for the commit tagged \"15beta2\", I am\n> currently
rebasing this branch onto the actual master and will provide rebased\n> version,\n> with some corrections according to your feedback, in a day or two.\n>\n> >Indexes cannot index toasted values, so why would the toaster oid be\n> >interesting for index storage properties?\n>\n> Here Teodor might correct me. Toast tables are indexed, and Heap TOAST\n> mechanics accesses Toasted tuples by index, isn't it the case?\n>\n> >Moving toasters seems quite expensive when compared to just index\n> >checks. When you have many toasters, but only a few hot ones, this\n> >currently will move the \"cold\" toasters around a lot. Why not use a\n> >stack instead (or alternatively, a 'zipper' (or similar) data\n> >structure), where the hottest toaster is on top, so that we avoid\n> >larger memmove calls?\n>\n> This is a reasonable remark, we'll consider it for the next iteration. Our\n> reason\n> is that we think there won't be a lot of custom Toasters, most likely less\n> then\n> a dozen, for the most complex/heavy datatypes so we haven't considered\n> these expenses.\n>\n> >To the best of my knowledge varlena-pointers are unaligned; and this\n> >is affirmed by the comment immediately under varattrib_1b_e. Assuming\n> >alignment to 16 bits should/would be incorrect in some of the cases.\n> >This is handled for normal varatt_external values by memcpy-ing the\n> >value into local (aligned) fields before using them, but that doesn't\n> >seem to be the case for varatt_custom?\n>\n> Alignment correction seemed reasonable for us because structures are\n> anyway aligned in memory, so when we use 1 and 2-byte fields along\n> with 4-byte it may create a lot of padding. Am I wrong? 
Also, correct\n> alignment somewhat simplifies access to structure fields.\n>\n> >0003: (re-implement default toaster using toaster interface)\n> >I see significant changes to the dummy toaster (created in 0002),\n> >could those be moved to patch 0002 in the next iteration?\n> Will do.\n>\n> >detoast.c and detoast.h are effectively empty after this patch (only\n> >imports and commented-out code remain), please fully remove them\n> >instead - that saves on patch diff size.\n> Will do.\n>\n> About the location of toast_ functions: these functions are part of Heap\n> TOAST mechanics, and they were scattered among other Heap internals\n> sources. I've tried to gather them and put them in more straight order,\n> but\n> this work is not fully finished yet and will take some time. Working on it.\n>\n> I'll check if table_relation_fetch_toast_slice could be removed, thanks for\n> the remark.\n>\n> toast_helper - done, will be provided in rebased version.\n>\n> toast_internals - this one is an internal part of TOAST implemented in\n> Heap AM, but I'll try to straighten it out as much as I could.\n>\n> naming conventions in some sources - done, will be provided in rebased\n> patch set.\n>\n> >Shouldn't the creation of toast tables be delegated to the toaster?\n>\n> Yes, you're right, and actually, it is. I'll check that and correct in\n> rebased\n> version.\n>\n> >Although this is code being moved around, the comment is demonstrably\n> >false: A cancelled REINDEX CONCURRENTLY with a subsequent REINDEX can\n> >leave a toast relation with 2 valid indexes.\n>\n> This code is quite old, we've not changed it but thanks for the remark,\n> I'll check it more carefully.\n>\n> Small fixes are already merged into larger patches in attached files. Also,\n> I appreciate your feedback on documentation - if you would have an\n> opportunity\n> please check README provided in 0003. 
I've took your comments on\n> documentation\n> into account and will include corrections according to them into rebased\n> patch.\n>\n> As Aleksander recommended, I've shortened the patch set and left only the\n> most\n> important part - API and re-implemented default Toast. All bells and\n> whistles are not\n> of so much importance and could be sent later after the API itself will be\n> straightened\n> out and commited.\n>\n> Thank you very much!\n>\n> --\nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Tue, 26 Jul 2022 00:20:23 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi Nikita,\n\nThanks for an update!\n\n> 0002_toaster_interface_v10 contains TOAST API with Dummy toaster as an example (but I would,\n> as recommended, remove Dummy toaster and provide it as an extension), and default Toaster\n> was left as-is (reference implementation).\n>\n> 0003_toaster_default_v9 implements reference TOAST as Default Toaster via TOAST API,\n> so Heap AM calls Toast only via API, and does not have direct calls to Toast functionality.\n>\n> 0004_toaster_snapshot_v8 continues refactoring and has some important changes (added\n> into README.toastapi new part related TOAST API extensibility - the virtual functions table).\n\nThis numbering is confusing. Please use a command like:\n\n```\ngit format-patch origin/master -v 42\n```\n\nThis will produce a patchset with a consistent naming like:\n\n```\nv42-0001-foo-bar.patch\nv42-0002-baz-qux.patch\n... etc ...\n```\n\nAlso cfbot [1] will know in which order to apply them.\n\n> GIT branch with this patch resides here:\n> https://github.com/postgrespro/postgres/tree/toasterapi_clean\n\nUnfortunately the three patches in question from this branch don't\npass `make check`. 
Please update\nsrc/test/regress/expected/publication.out and make sure the patchset\npasses the rest of the tests at least on one platform before\nsubmitting.\n\nPersonally I have a little set of scripts for this [2]. The following\ncommands should pass:\n\n```\n# quick check\n./quick-build.sh && ./single-install.sh && make installcheck\n\n# full check\n./full-build.sh && ./single-install.sh && make installcheck-world\n```\n\nFinally, please update the commit messages. Each commit message should\ninclude a brief description (one line), a detailed description (the\nbody), and also the list of the authors, the reviewers, and a link to\nthe discussion. Please use [3] as a template.\n\n[1]: http://cfbot.cputube.org/\n[2]: https://github.com/afiskon/pgscripts/\n[3]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=784cedda0604ee4ac731fd0b00cd8b27e78c02d3\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 26 Jul 2022 11:22:57 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers!\n\nAleksander, thanks for the remark; it seems we've missed a recent change -\nthe publication\ntest does not have the new column 'Toaster'. Will send a corrected patch\ntomorrow. 
Also, thanks\nfor the patch name note, I've changed it as you suggested.\nI'm on vacation, so I read emails not very often and answers take some\ntime, sorry.\n\n\nOn Tue, Jul 26, 2022 at 11:23 AM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi Nikita,\n>\n> Thanks for an update!\n>\n> > 0002_toaster_interface_v10 contains TOAST API with Dummy toaster as an\n> example (but I would,\n> > as recommended, remove Dummy toaster and provide it as an extension),\n> and default Toaster\n> > was left as-is (reference implementation).\n> >\n> > 0003_toaster_default_v9 implements reference TOAST as Default Toaster\n> via TOAST API,\n> > so Heap AM calls Toast only via API, and does not have direct calls to\n> Toast functionality.\n> >\n> > 0004_toaster_snapshot_v8 continues refactoring and has some important\n> changes (added\n> > into README.toastapi new part related TOAST API extensibility - the\n> virtual functions table).\n>\n> This numbering is confusing. Please use a command like:\n>\n> ```\n> git format-patch origin/master -v 42\n> ```\n>\n> This will produce a patchset with a consistent naming like:\n>\n> ```\n> v42-0001-foo-bar.patch\n> v42-0002-baz-qux.patch\n> ... etc ...\n> ```\n>\n> Also cfbot [1] will know in which order to apply them.\n>\n> > GIT branch with this patch resides here:\n> > https://github.com/postgrespro/postgres/tree/toasterapi_clean\n>\n> Unfortunately the three patches in question from this branch don't\n> pass `make check`. Please update\n> src/test/regress/expected/publication.out and make sure the patchset\n> passes the rest of the tests at least on one platform before\n> submitting.\n>\n> Personally I have a little set of scripts for this [2]. The following\n> commands should pass:\n>\n> ```\n> # quick check\n> ./quick-build.sh && ./single-install.sh && make installcheck\n>\n> # full check\n> ./full-build.sh && ./single-install.sh && make installcheck-world\n> ```\n>\n> Finally, please update the commit messages. 
Each commit message should\n> include a brief description (one line) , a detailed description (the\n> body), and also the list of the authors, the reviewers and a link to\n> the discussion. Please use [3] as a template.\n>\n> [1]: http://cfbot.cputube.org/\n> [2]: https://github.com/afiskon/pgscripts/\n> [3]:\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=784cedda0604ee4ac731fd0b00cd8b27e78c02d3\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n", "msg_date": "Fri, 29 Jul 2022 09:16:08 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers!\n\nSorry for the delay.\nI've made changes according to Aleksander comments:\n\n>This will produce a patchset with a consistent naming like:\n>...\n>Also cfbot [1] will know in which order to apply them.\n\nDone.\n\n>Unfortunately the three patches in question from this branch don't\n>pass `make check`. 
Please update\n>src/test/regress/expected/publication.out and make sure the patchset\n>passes the rest of the tests at least on one platform before\n>submitting.\n\nDone, there was a missing column in the Publication tests.\n\n>Finally, please update the commit messages. Each commit message should\n>include a brief description (one line) , a detailed description (the\n>body), and also the list of the authors, the reviewers and a link to\n>the discussion. Please use [3] as a template.\n\nDone.\n\nLink to the rebased branch:\nhttps://github.com/postgrespro/postgres/tree/toasterapi_clean\nPlease check.\n\nThe attachment includes:\nv11-0002-toaster-interface.patch - contains the TOAST API with the default Toaster\nleft as-is (reference implementation) and the Dummy toaster as an example\n(will be removed later as a part of refactoring?).\n\nv11-0003-toaster-default.patch - implements reference TOAST as the Default\nToaster\nvia the TOAST API, so Heap AM calls TOAST only via the API and does not have direct\ncalls to TOAST functionality.\n\nv11-0004-toaster-snapshot.patch - supports row versioning for TOASTed values\nand some refactoring.\n\nAleksander, thank you and Matthias for the very helpful feedback!\n\nOn Fri, Jul 29, 2022 at 9:16 AM Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> Hi hackers!\n>\n> Aleksander, thanks for the remark, seems that we've missed recent change -\n> the pubication\n> test does not have the new column 'Toaster'. Will send a corrected patch\n> tomorrow. 
Also, thanks\n> for the patch name note, I've changed it as you suggested.\n> I'm on vacation, so I read emails not very often and answers take some\n> time, sorry.\n>\n>\n> On Tue, Jul 26, 2022 at 11:23 AM Aleksander Alekseev <\n> aleksander@timescale.com> wrote:\n>\n>> Hi Nikita,\n>>\n>> Thanks for an update!\n>>\n>> > 0002_toaster_interface_v10 contains TOAST API with Dummy toaster as an\n>> example (but I would,\n>> > as recommended, remove Dummy toaster and provide it as an extension),\n>> and default Toaster\n>> > was left as-is (reference implementation).\n>> >\n>> > 0003_toaster_default_v9 implements reference TOAST as Default Toaster\n>> via TOAST API,\n>> > so Heap AM calls Toast only via API, and does not have direct calls to\n>> Toast functionality.\n>> >\n>> > 0004_toaster_snapshot_v8 continues refactoring and has some important\n>> changes (added\n>> > into README.toastapi new part related TOAST API extensibility - the\n>> virtual functions table).\n>>\n>> This numbering is confusing. Please use a command like:\n>>\n>> ```\n>> git format-patch origin/master -v 42\n>> ```\n>>\n>> This will produce a patchset with a consistent naming like:\n>>\n>> ```\n>> v42-0001-foo-bar.patch\n>> v42-0002-baz-qux.patch\n>> ... etc ...\n>> ```\n>>\n>> Also cfbot [1] will know in which order to apply them.\n>>\n>> > GIT branch with this patch resides here:\n>> > https://github.com/postgrespro/postgres/tree/toasterapi_clean\n>>\n>> Unfortunately the three patches in question from this branch don't\n>> pass `make check`. Please update\n>> src/test/regress/expected/publication.out and make sure the patchset\n>> passes the rest of the tests at least on one platform before\n>> submitting.\n>>\n>> Personally I have a little set of scripts for this [2]. 
The following\n>> commands should pass:\n>>\n>> ```\n>> # quick check\n>> ./quick-build.sh && ./single-install.sh && make installcheck\n>>\n>> # full check\n>> ./full-build.sh && ./single-install.sh && make installcheck-world\n>> ```\n>>\n>> Finally, please update the commit messages. Each commit message should\n>> include a brief description (one line) , a detailed description (the\n>> body), and also the list of the authors, the reviewers and a link to\n>> the discussion. Please use [3] as a template.\n>>\n>> [1]: http://cfbot.cputube.org/\n>> [2]: https://github.com/afiskon/pgscripts/\n>> [3]:\n>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=784cedda0604ee4ac731fd0b00cd8b27e78c02d3\n>>\n>> --\n>> Best regards,\n>> Aleksander Alekseev\n>>\n>\n>\n> --\n> Regards,\n> Nikita Malakhov\n> Postgres Professional\n> https://postgrespro.ru/\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Tue, 2 Aug 2022 09:15:12 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\n\nOn 2022-08-02 09:15:12 +0300, Nikita Malakhov wrote:\n> Attach includes:\n> v11-0002-toaster-interface.patch - contains TOAST API with default Toaster\n> left as-is (reference implementation) and Dummy toaster as an example\n> (will be removed later as a part of refactoring?).\n> \n> v11-0003-toaster-default.patch - implements reference TOAST as Default\n> Toaster\n> via TOAST API, so Heap AM calls Toast only via API, and does not have direct\n> calls to Toast functionality.\n> \n> v11-0004-toaster-snapshot.patch - supports row versioning for TOASTed values\n> and some refactoring.\n\nI'm a bit confused by the patch numbering - why isn't there a patch 0001 and\n0005?\n\nI think 0002 needs to be split further - the relevant part isn't that it\nintroduces the \"dummy toaster\" module, it's a large change doing lots of\nthings, the addition of the 
contrib module is irrelevant comparatively.\n\nAs is, the patches unfortunately are close to unreviewable. Lots of code gets\nmoved around in one patch, then again in the next patch, then again in the\nnext.\n\n\nUnfortunately, scanning through these patches, it seems this is a lot of\ncomplexity, with a (for me) commensurate benefit. There are a lot of more general\nimprovements to toasting and the json type that we can do, that are a lot less\ncomplex than this.\n\n\n> From 6b35d6091248e120d2361cf0a806dbfb161421cf Mon Sep 17 00:00:00 2001\n> From: Nikita Malakhov <n.malakhov@postgrespro.ru>\n> Date: Tue, 12 Apr 2022 18:37:21 +0300\n> Subject: [PATCH] Pluggable TOAST API interface with dummy_toaster contrib\n> module\n> \n> Pluggable TOAST API is introduced with implemented contrib example\n> module.\n> Pluggable TOAST API consists of 4 parts:\n> 1) SQL syntax supports manipulations with toasters - CREATE TABLE ...\n> (column type STORAGE storage_type TOASTER toaster), ALTER TABLE ALTER\n> COLUMN column SET TOASTER toaster and Toaster definition.\n> TOAST API requires earlier patch with CREATE TABLE SET STORAGE clause;\n> New column atttoaster is added to pg_attribute.\n> Toaster drop is not allowed for not to lose already toasted data;\n> 2) New VARATT_CUSTOM data structure with fixed header and variable\n> tail to store custom toasted data, with according macros set;\n\nThat's adding overhead to every toast interaction, independent of any new\ninfrastructure being used.\n\n\n\n> 4) Dummy toaster implemented via new TOAST API to be used as sample.\n> In this patch regular (default) TOAST function is left as-is and not\n> yet implemented via new API.\n> TOAST API syntax and code explanation provided in additional docs patch.\n\nI'd make this a separate commit.\n\n\n\n> @@ -445,6 +447,8 @@ equalTupleDescs(TupleDesc tupdesc1, TupleDesc tupdesc2)\n> \t\t\treturn false;\n> \t\tif (attr1->attstorage != attr2->attstorage)\n> \t\t\treturn false;\n> +\t\tif (attr1->atttoaster 
!= attr2->atttoaster)\n> +\t\t\treturn false;\n\nSo we're increasing pg_attribute - often already the largest catalog table in\na database.\n\nAm I just missing something, or is atttoaster not actually used in this patch?\nSo most of the contrib module added is unreachable code?\n\n\n> +/*\n> + * Toasters is very often called so syscache lookup and TsrRoutine allocation are\n> + * expensive and we need to cache them.\n\nUgh.\n\n> + * We believe what there are only a few toasters and there is high chance that\n> + * only one or only two of them are heavy used, so most used toasters should be\n> + * found as easy as possible. So, let us use a simple list, in future it could\n> + * be changed to other structure. For now it will be stored in TopCacheContext\n> + * and never destroed in backend life cycle - toasters are never deleted.\n> + */\n\nThat seems not great.\n\n\n> +typedef struct ToasterCacheEntry\n> +{\n> +\tOid\t\t\ttoasterOid;\n> +\tTsrRoutine *routine;\n> +} ToasterCacheEntry;\n> +\n> +static List\t*ToasterCache = NIL;\n> +\n> +/*\n> + * SearchTsrCache - get cached toaster routine, emits an error if toaster\n> + * doesn't exist\n> + */\n> +TsrRoutine*\n> +SearchTsrCache(Oid\ttoasterOid)\n> +{\n> +\tListCell\t\t *lc;\n> +\tToasterCacheEntry *entry;\n> +\tMemoryContext\t\tctx;\n> +\n> +\tif (list_length(ToasterCache) > 0)\n> +\t{\n> +\t\t/* fast path */\n> +\t\tentry = (ToasterCacheEntry*)linitial(ToasterCache);\n> +\t\tif (entry->toasterOid == toasterOid)\n> +\t\t\treturn entry->routine;\n> +\t}\n> +\n> +\t/* didn't find in first position */\n> +\tctx = MemoryContextSwitchTo(CacheMemoryContext);\n> +\n> +\tfor_each_from(lc, ToasterCache, 0)\n> +\t{\n> +\t\tentry = (ToasterCacheEntry*)lfirst(lc);\n> +\n> +\t\tif (entry->toasterOid == toasterOid)\n> +\t\t{\n> +\t\t\t/* remove entry from list, it will be added in a head of list below */\n> +\t\t\tforeach_delete_current(ToasterCache, lc);\n\nThat needs to move later list elements!\n\n\n> +\t\t\tgoto out;\n> 
+\t\t}\n> +\t}\n> +\n> +\t/* did not find entry, make a new one */\n> +\tentry = palloc(sizeof(*entry));\n> +\n> +\tentry->toasterOid = toasterOid;\n> +\tentry->routine = GetTsrRoutineByOid(toasterOid, false);\n> +\n> +out:\n> +\tToasterCache = lcons(entry, ToasterCache);\n\nThat moves the whole list around! On a cache hit. This would likely already be\nslower than syscache.\n\n\n> diff --git a/contrib/dummy_toaster/dummy_toaster.c b/contrib/dummy_toaster/dummy_toaster.c\n> index 0d261f6042..02f49052b7 100644\n> --- a/contrib/dummy_toaster/dummy_toaster.c\n> +++ b/contrib/dummy_toaster/dummy_toaster.c\n\nSo this is just changing around the code added in the prior commit. Why was it\nthen included before?\n\n\n> +++ b/src/include/access/generic_toaster.h\n\n> +HeapTuple\n> +heap_toast_insert_or_update(Relation rel, HeapTuple newtup, HeapTuple oldtup,\n> +\t\t\t\t\t\t\tint options);\n\nThe generic toast API has heap_* in its name?\n\n\n\n\n> From 4112cd70b05dda39020d576050a98ca3cdcf2860 Mon Sep 17 00:00:00 2001\n> From: Nikita Malakhov <n.malakhov@postgrespro.ru>\n> Date: Tue, 12 Apr 2022 22:57:21 +0300\n> Subject: [PATCH] Versioned rows in TOASTed values for Default Toaster support\n> \n> Original TOAST mechanics does not support rows versioning\n> for TOASTed values.\n> Toaster snapshot - refactored generic toaster, implements\n> rows versions check in toasted values to share common parts\n> of toasted values between different versions of rows.\n\nThis misses explaining *WHY* this is changed.\n\n\n\n> diff --git a/src/backend/access/common/detoast.c b/src/backend/access/common/detoast.c\n> deleted file mode 100644\n> index aff8042166..0000000000\n> --- a/src/backend/access/common/detoast.c\n> +++ /dev/null\n\nThese patches really move things around in a largely random way.\n\n\n> -static bool toastrel_valueid_exists(Relation toastrel, Oid valueid);\n> -static bool toastid_valueid_exists(Oid toastrelid, Oid valueid);\n> -\n> +static void\n
+toast_extract_chunk_fields(Relation toastrel, TupleDesc toasttupDesc,\n> +\t\t\t\t\t\t Oid valueid, HeapTuple ttup, int32 *seqno,\n> +\t\t\t\t\t\t char **chunkdata, int *chunksize);\n> +\n> +static void\n> +toast_write_slice(Relation toastrel, Relation *toastidxs,\n> +\t\t\t\t int num_indexes, int validIndex,\n> +\t\t\t\t Oid valueid, int32 value_size, int32 slice_offset,\n> +\t\t\t\t int32 slice_length, char *slice_data,\n> +\t\t\t\t int options,\n> +\t\t\t\t void *chunk_header, int chunk_header_size,\n> +\t\t\t\t ToastChunkVisibilityCheck visibility_check,\n> +\t\t\t\t void *visibility_cxt);\n\nWhat do all these changes have to do with \"Versioned rows in TOASTed\nvalues for Default Toaster support\"?\n\n\n> +static void *\n> +toast_fetch_old_chunk(Relation toastrel, SysScanDesc toastscan, Oid valueid,\n> +\t\t\t\t\t int32 expected_chunk_seq, int32 last_old_chunk_seq,\n> +\t\t\t\t\t ToastChunkVisibilityCheck visibility_check,\n> +\t\t\t\t\t void *visibility_cxt,\n> +\t\t\t\t\t int32 *p_old_chunk_size, ItemPointer old_tid)\n> +{\n> +\tfor (;;)\n> +\t{\n> +\t\tHeapTuple\told_toasttup;\n> +\t\tchar\t *old_chunk_data;\n> +\t\tint32\t\told_chunk_seq;\n> +\t\tint32\t\told_chunk_data_size;\n> +\n> +\t\told_toasttup = systable_getnext_ordered(toastscan, ForwardScanDirection);\n> +\n> +\t\tif (old_toasttup)\n> +\t\t{\n> +\t\t\t/* Skip aborted chunks */\n> +\t\t\tif (!HeapTupleHeaderXminCommitted(old_toasttup->t_data))\n> +\t\t\t{\n> +\t\t\t\tTransactionId xmin = HeapTupleHeaderGetXmin(old_toasttup->t_data);\n> +\n> +\t\t\t\tAssert(!HeapTupleHeaderXminInvalid(old_toasttup->t_data));\n> +\n> +\t\t\t\tif (TransactionIdDidAbort(xmin))\n> +\t\t\t\t\tcontinue;\n> +\t\t\t}\n\nWhy is there visibility logic in quite random places? Also, it's not \"legal\"\nto call TransactionIdDidAbort() without having checked\nTransactionIdIsInProgress() first. 
And what does this have to do with\nsnapshots - it's pretty clearly not implementing snapshot logic.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 2 Aug 2022 08:37:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers!\n\nI've made a rebase according to Andres's and Aleksander's comments.\nThe rebased branch resides here:\nhttps://github.com/postgrespro/postgres/tree/toasterapi_clean\n\nI've decided to leave only the first 2 patches for review and send the less\nsignificant\nchanges after the main patch is straightened out.\nSo, here is\nv13-0001-toaster-interface.patch - the main TOAST API patch, with reference\nTOAST\nmechanics left as-is.\nv13-0002-toaster-default.patch - reference TOAST re-implemented via the TOAST\nAPI.\n\n\n>I'm a bit confused by the patch numbering - why isn't there a patch 0001\nand\n>0005?\nSorry for the confusion, my fault. The first patch (CREATE TABLE .. SET\nSTORAGE)\nis already committed into v15, and several of the later patches weren't\nincluded.\nI've rearranged the patch numbers in this iteration.\n\n>I think 0002 needs to be split further - the relevant part isn't that it\n>introduces the \"dummy toaster\" module, it's a large change doing lots of\n>things, the addition of the contrib module is irrelevant comparatively.\n\nDone, contrib/dummy_toaster is excluded from the main patch and placed in the branch\nas a separate commit.\n\n>As is the patches unfortunately are close to unreviewable. Lots of code\ngets\n>moved around in one patch, then again in the next patch, then again in the\n>next.\n\nSo I've decided to put here only the first one while I'm working on the\nlatter to clean\nthis up - I agree, the code in the latter patches needs some refactoring. Working\non it.\n\n>Unfortunately, scanning through these patches, it seems this is a lot of\n>complexity, with a (for me) comensurate benefit. 
There's a lot of more\ngeneral\n>improvements to toasting and the json type that we can do, that are a lot\nless\n>complex than this.\n\nWe have very significant improvements for storing large JSON and a couple of\nother TOAST improvements which make a lot of sense, but they are based on\nthis API. But in the first patch reference TOAST is left as-is, and does\nnot use\nTOAST API.\n\n>> 2) New VARATT_CUSTOM data structure with fixed header and variable\n>> tail to store custom toasted data, with according macros set;\n\n>That's adding overhead to every toast interaction, independent of any new\n>infrastructure being used.\n\nWe've performed some tests on this and haven't detected significant\noverhead,\n\n>So we're increasing pg_attribute - often already the largest catalog table\nin\n>a database.\n\nA little bit, with an OID column storing Toaster OID. We do not see any\nother way\nto keep track of Toaster used by the table's column, because it could be\nchanged\nany time by ALTER TABLE ... SET TOASTER.\n\n>Am I just missing something, or is atttoaster not actually used in this\npatch?\n>So most of the contrib module added is unreachable code?\n\nIt is necessary for Toasters implemented via TOAST API, the first patch\ndoes not\nuse it directly because reference TOAST is left unchanged. The second one\nwhich\nimplements reference TOAST via TOAST API uses it.\n\n>That seems not great.\n\nAbout Toasters deletion - we forbid dropping Toasters because if Toaster is\ndropped\nthe data TOASTed with it is lost, and as was mentioned before, we think\nthat there\nwon't be a lot of custom Toasters, likely seems to be less then a dozen.\n\n>That move the whole list around! On a cache hit. Tthis would likely\nalready be\n>slower than syscache.\n\nThank you for the remark, it is questionable approach. 
I've changed this in the\ncurrent iteration\n(patch attached) to keep the Toaster list append-only if a Toaster was not\nfound, and to leave\nthe Toaster cache as a straight list - the first element is the head of the list.\n\nAlso, documentation on the TOAST API is provided in README.toastapi in the\nfirst patch -\nI'd be grateful for comments on it.\n\nThanks for the feedback!\n\nOn Tue, Aug 2, 2022 at 6:37 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-08-02 09:15:12 +0300, Nikita Malakhov wrote:\n> > Attach includes:\n> > v11-0002-toaster-interface.patch - contains TOAST API with default\n> Toaster\n> > left as-is (reference implementation) and Dummy toaster as an example\n> > (will be removed later as a part of refactoring?).\n> >\n> > v11-0003-toaster-default.patch - implements reference TOAST as Default\n> > Toaster\n> > via TOAST API, so Heap AM calls Toast only via API, and does not have\n> direct\n> > calls to Toast functionality.\n> >\n> > v11-0004-toaster-snapshot.patch - supports row versioning for TOASTed\n> values\n> > and some refactoring.\n>\n> I'm a bit confused by the patch numbering - why isn't there a patch 0001\n> and\n> 0005?\n>\n> I think 0002 needs to be split further - the relevant part isn't that it\n> introduces the \"dummy toaster\" module, it's a large change doing lots of\n> things, the addition of the contrib module is irrelevant comparatively.\n>\n> As is the patches unfortunately are close to unreviewable. Lots of code\n> gets\n> moved around in one patch, then again in the next patch, then again in the\n> next.\n>\n>\n> Unfortunately, scanning through these patches, it seems this is a lot of\n> complexity, with a (for me) comensurate benefit. 
There's a lot of more\n> general\n> improvements to toasting and the json type that we can do, that are a lot\n> less\n> complex than this.\n>\n>\n> > From 6b35d6091248e120d2361cf0a806dbfb161421cf Mon Sep 17 00:00:00 2001\n> > From: Nikita Malakhov <n.malakhov@postgrespro.ru>\n> > Date: Tue, 12 Apr 2022 18:37:21 +0300\n> > Subject: [PATCH] Pluggable TOAST API interface with dummy_toaster contrib\n> > module\n> >\n> > Pluggable TOAST API is introduced with implemented contrib example\n> > module.\n> > Pluggable TOAST API consists of 4 parts:\n> > 1) SQL syntax supports manipulations with toasters - CREATE TABLE ...\n> > (column type STORAGE storage_type TOASTER toaster), ALTER TABLE ALTER\n> > COLUMN column SET TOASTER toaster and Toaster definition.\n> > TOAST API requires earlier patch with CREATE TABLE SET STORAGE clause;\n> > New column atttoaster is added to pg_attribute.\n> > Toaster drop is not allowed for not to lose already toasted data;\n> > 2) New VARATT_CUSTOM data structure with fixed header and variable\n> > tail to store custom toasted data, with according macros set;\n>\n> That's adding overhead to every toast interaction, independent of any new\n> infrastructure being used.\n>\n>\n>\n> > 4) Dummy toaster implemented via new TOAST API to be used as sample.\n> > In this patch regular (default) TOAST function is left as-is and not\n> > yet implemented via new API.\n> > TOAST API syntax and code explanation provided in additional docs patch.\n>\n> I'd make this a separate commit.\n>\n>\n>\n> > @@ -445,6 +447,8 @@ equalTupleDescs(TupleDesc tupdesc1, TupleDesc\n> tupdesc2)\n> > return false;\n> > if (attr1->attstorage != attr2->attstorage)\n> > return false;\n> > + if (attr1->atttoaster != attr2->atttoaster)\n> > + return false;\n>\n> So we're increasing pg_attribute - often already the largest catalog table\n> in\n> a database.\n>\n> Am I just missing something, or is atttoaster not actually used in this\n> patch?\n> So most of the contrib module added 
is unreachable code?\n>\n>\n> > +/*\n> > + * Toasters is very often called so syscache lookup and TsrRoutine\n> allocation are\n> > + * expensive and we need to cache them.\n>\n> Ugh.\n>\n> > + * We believe what there are only a few toasters and there is high\n> chance that\n> > + * only one or only two of them are heavy used, so most used toasters\n> should be\n> > + * found as easy as possible. So, let us use a simple list, in future\n> it could\n> > + * be changed to other structure. For now it will be stored in\n> TopCacheContext\n> > + * and never destroed in backend life cycle - toasters are never\n> deleted.\n> > + */\n>\n> That seems not great.\n>\n>\n> > +typedef struct ToasterCacheEntry\n> > +{\n> > + Oid toasterOid;\n> > + TsrRoutine *routine;\n> > +} ToasterCacheEntry;\n> > +\n> > +static List *ToasterCache = NIL;\n> > +\n> > +/*\n> > + * SearchTsrCache - get cached toaster routine, emits an error if\n> toaster\n> > + * doesn't exist\n> > + */\n> > +TsrRoutine*\n> > +SearchTsrCache(Oid toasterOid)\n> > +{\n> > + ListCell *lc;\n> > + ToasterCacheEntry *entry;\n> > + MemoryContext ctx;\n> > +\n> > + if (list_length(ToasterCache) > 0)\n> > + {\n> > + /* fast path */\n> > + entry = (ToasterCacheEntry*)linitial(ToasterCache);\n> > + if (entry->toasterOid == toasterOid)\n> > + return entry->routine;\n> > + }\n> > +\n> > + /* didn't find in first position */\n> > + ctx = MemoryContextSwitchTo(CacheMemoryContext);\n> > +\n> > + for_each_from(lc, ToasterCache, 0)\n> > + {\n> > + entry = (ToasterCacheEntry*)lfirst(lc);\n> > +\n> > + if (entry->toasterOid == toasterOid)\n> > + {\n> > + /* remove entry from list, it will be added in a\n> head of list below */\n> > + foreach_delete_current(ToasterCache, lc);\n>\n> That needs to move later list elements!\n>\n>\n> > + goto out;\n> > + }\n> > + }\n> > +\n> > + /* did not find entry, make a new one */\n> > + entry = palloc(sizeof(*entry));\n> > +\n> > + entry->toasterOid = toasterOid;\n> > + entry->routine = 
GetTsrRoutineByOid(toasterOid, false);\n> > +\n> > +out:\n> > + ToasterCache = lcons(entry, ToasterCache);\n>\n> That move the whole list around! On a cache hit. Tthis would likely\n> already be\n> slower than syscache.\n>\n>\n> > diff --git a/contrib/dummy_toaster/dummy_toaster.c\n> b/contrib/dummy_toaster/dummy_toaster.c\n> > index 0d261f6042..02f49052b7 100644\n> > --- a/contrib/dummy_toaster/dummy_toaster.c\n> > +++ b/contrib/dummy_toaster/dummy_toaster.c\n>\n> So this is just changing around the code added in the prior commit. Why\n> was it\n> then included before?\n>\n>\n> > +++ b/src/include/access/generic_toaster.h\n>\n> > +HeapTuple\n> > +heap_toast_insert_or_update(Relation rel, HeapTuple newtup, HeapTuple\n> oldtup,\n> > + int options);\n>\n> The generic toast API has heap_* in its name?\n>\n>\n>\n>\n> > From 4112cd70b05dda39020d576050a98ca3cdcf2860 Mon Sep 17 00:00:00 2001\n> > From: Nikita Malakhov <n.malakhov@postgrespro.ru>\n> > Date: Tue, 12 Apr 2022 22:57:21 +0300\n> > Subject: [PATCH] Versioned rows in TOASTed values for Default Toaster\n> support\n> >\n> > Original TOAST mechanics does not support rows versioning\n> > for TOASTed values.\n> > Toaster snapshot - refactored generic toaster, implements\n> > rows versions check in toasted values to share common parts\n> > of toasted values between different versions of rows.\n>\n> This misses explaining *WHY* this is changed.\n>\n>\n>\n> > diff --git a/src/backend/access/common/detoast.c\n> b/src/backend/access/common/detoast.c\n> > deleted file mode 100644\n> > index aff8042166..0000000000\n> > --- a/src/backend/access/common/detoast.c\n> > +++ /dev/null\n>\n> These patches really move things around in a largely random way.\n>\n>\n> > -static bool toastrel_valueid_exists(Relation toastrel, Oid valueid);\n> > -static bool toastid_valueid_exists(Oid toastrelid, Oid valueid);\n> > -\n> > +static void\n> > +toast_extract_chunk_fields(Relation toastrel, TupleDesc toasttupDesc,\n> > + Oid valueid, 
HeapTuple\n> ttup, int32 *seqno,\n> > + char **chunkdata, int\n> *chunksize);\n> > +\n> > +static void\n> > +toast_write_slice(Relation toastrel, Relation *toastidxs,\n> > + int num_indexes, int validIndex,\n> > + Oid valueid, int32 value_size, int32\n> slice_offset,\n> > + int32 slice_length, char *slice_data,\n> > + int options,\n> > + void *chunk_header, int\n> chunk_header_size,\n> > + ToastChunkVisibilityCheck\n> visibility_check,\n> > + void *visibility_cxt);\n>\n> What do all these changes have to do with \"Versioned rows in TOASTed\n> values for Default Toaster support\"?\n>\n>\n> > +static void *\n> > +toast_fetch_old_chunk(Relation toastrel, SysScanDesc toastscan, Oid\n> valueid,\n> > + int32 expected_chunk_seq, int32\n> last_old_chunk_seq,\n> > + ToastChunkVisibilityCheck\n> visibility_check,\n> > + void *visibility_cxt,\n> > + int32 *p_old_chunk_size,\n> ItemPointer old_tid)\n> > +{\n> > + for (;;)\n> > + {\n> > + HeapTuple old_toasttup;\n> > + char *old_chunk_data;\n> > + int32 old_chunk_seq;\n> > + int32 old_chunk_data_size;\n> > +\n> > + old_toasttup = systable_getnext_ordered(toastscan,\n> ForwardScanDirection);\n> > +\n> > + if (old_toasttup)\n> > + {\n> > + /* Skip aborted chunks */\n> > + if\n> (!HeapTupleHeaderXminCommitted(old_toasttup->t_data))\n> > + {\n> > + TransactionId xmin =\n> HeapTupleHeaderGetXmin(old_toasttup->t_data);\n> > +\n> > +\n> Assert(!HeapTupleHeaderXminInvalid(old_toasttup->t_data));\n> > +\n> > + if (TransactionIdDidAbort(xmin))\n> > + continue;\n> > + }\n>\n> Why is there visibility logic in quite random places? Also, it's not\n> \"legal\"\n> to call TransactionIdDidAbort() without having checked\n> TransactionIdIsInProgress() first. 
And what does this have to do with
> snapshots - it's pretty clearly not implementing snapshot logic.
>
>
> Greetings,
>
> Andres Freund
>


-- 
Regards,
Nikita Malakhov
Postgres Professional
https://postgrespro.ru/", "msg_date": "Thu, 4 Aug 2022 23:18:42 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi Nikita,

> I've decided to leave only the first 2 patches for review and send less significant
> changes after the main patch will be straightened out.
> So, here is
> v13-0001-toaster-interface.patch - main TOAST API patch, with reference TOAST
> mechanics left as-is.
> v13-0002-toaster-default.patch - reference TOAST re-implemented via TOAST API.
> [...]

Great! Thank you.

Unfortunately the patchset still seems to have difficulties passing
the CI checks (see http://cfbot.cputube.org/ ). Any chance we may see
a version rebased to the current `master` branch for the September CF?

-- 
Best regards,
Aleksander Alekseev


", "msg_date": "Tue, 23 Aug 2022 11:27:40 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers!

I've rebased actual branch onto the latest master and re-created patches.
Checked with git am,
all applied correctly. 
Please check the attached patches.\nRebased branch resides here:\nhttps://github.com/postgrespro/postgres/tree/toasterapi_clean\n\nJust to remind - I've decided to leave only the first 2 patches for review\nand send less significant\nchanges after the main patch will be straightened out.\nSo, here is\nv14-0001-toaster-interface.patch - main TOAST API patch, with reference\nTOAST\nmechanics left as-is.\nv14-0002-toaster-default.patch - reference TOAST re-implemented via TOAST\nAPI.\n\nOn Tue, Aug 23, 2022 at 11:27 AM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi Nikita,\n>\n> > I've decided to leave only the first 2 patches for review and send less\n> significant\n> > changes after the main patch will be straightened out.\n> > So, here is\n> > v13-0001-toaster-interface.patch - main TOAST API patch, with reference\n> TOAST\n> > mechanics left as-is.\n> > v13-0002-toaster-default.patch - reference TOAST re-implemented via\n> TOAST API.\n> > [...]\n>\n> Great! Thank you.\n>\n> Unfortunately the patchset still seems to have difficulties passing\n> the CI checks (see http://cfbot.cputube.org/ ). Any chance we may see\n> a version rebased to the current `master` branch for the September CF?\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\n\n-- \nRegards,\nNikita Malakhov\nhttps://postgrespro.ru/", "msg_date": "Wed, 24 Aug 2022 12:59:23 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "On Wed, Aug 24, 2022 at 2:59 AM Nikita Malakhov <hukutoc@gmail.com> wrote:\n> I've rebased actual branch onto the latest master and re-created patches. Checked with git am,\n> all applied correctly. Please check the attached patches.\n> Rebased branch resides here:\n> https://github.com/postgrespro/postgres/tree/toasterapi_clean\n\nI tried installing and using the dummy_toaster that's provided with\nthe gitlink. 
Upgrade of that cluster fails with the following message:\n\n pg_restore: creating TOASTER \"dummy_toaster\"\n pg_restore: while PROCESSING TOC:\n pg_restore: from TOC entry 2044; 9861 16390 TOASTER dummy_toaster (no owner)\n pg_restore: error: could not execute query: ERROR: unrecognized\nor unsupported class OID: 9861\n Command was: CREATE TOASTER \"dummy_toaster\" HANDLER\n\"public\".\"dummy_toaster_handler\";\n\nI was looking through the thread for a more in-depth description of\nthe \"vtable\" concept, but I didn't see one. It looks like it's just an\narbitrary extension point, and any new additions would require surgery\non whatever function needs the particular magic provided by the\ntoaster. E.g. your bytea-append toaster extension in the gitlink,\nwhich still has to modify byteacat() in varlena.c to implement a very\nspecific optimization, and then declares its support for that\nhardcoded optimization in the extension.\n\nI'm skeptical that this would remain coherent as it grows. The patch\nclaims the vtable API is \"powerful\", which... I suppose it is, if you\nget to make arbitrary modifications to the core whenever you implement\nit. Did you already have thoughts about which operations would belong\nunder that umbrella? What would the procedure be for adding\nfunctionality to that API? 
What happens if a toaster wants to\nimplement two magic performance optimizations instead of one?\n\n--Jacob\n\n\n", "msg_date": "Mon, 12 Sep 2022 14:39:16 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers!\n\nJacob, I agree that the bytea toaster makes a bad example due to core\nmodification,\nand actually is not a good example of an extension.\n\nThe vtable concept is intended for less invasive additional functionality -\nlike providing\ndetoast iterators in addition to standard detoast mechanics - such\nmodification requires\nonly adding iteration methods to toaster and registering them in vtable,\nwithout any\ncore modifications. I'll add this as a separate commit for generic\n(default) Toaster.\n\nIt would be more clear for complex data types like JSONB, where developers\ncould\nneed some additional functionality to work with internal representation of\ndata type,\nand its full potential is revealed in our JSONB toaster extension. The\nJSONB toaster\nis still in development but we plan to make it available soon.\n\nFor example, we can pass Toaster options with attoptions (I'm currently\nworking on it)\nand these options could, say, allow switching different optimizations in\none toaster like\nadding specific compression options or data processing directives, etc.\n\nWe doubt that there would be a lot of different custom toasters, because\nthe Toaster\nis quite a complex piece of machinery, but means for extending them would\nbe heavily\ndemanded. 
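To make the vtable idea above a bit more concrete before the README is extended, here is a rough sketch in C. All type and function names here are illustrative only - they are not the exact structures from the patchset - but the shape is the same: a toaster exposes its mandatory methods plus a single lookup hook, and a caller probes that hook for an optional interface (such as a detoast iterator) instead of requiring changes in core:

```c
#include <stddef.h>
#include <string.h>

/*
 * Illustrative sketch only: these names are invented for the example
 * and do not match the actual patch API.
 */

/* An optional capability a toaster may expose, e.g. a detoast iterator. */
typedef struct DetoastIterVtable
{
    int (*iter_next)(void);     /* returns next chunk number, -1 when done */
} DetoastIterVtable;

/* The core toaster routine: mandatory methods plus one extension hook. */
typedef struct ToasterRoutine
{
    const char *name;
    /* mandatory toast/detoast/validate methods would go here */
    void *(*get_vtable)(const char *interface_name);
} ToasterRoutine;

/* A sample toaster whose iterator yields three chunks. */
static int sample_pos = 0;

static int
sample_iter_next(void)
{
    return sample_pos < 3 ? sample_pos++ : -1;
}

static DetoastIterVtable sample_iter_vtable = {sample_iter_next};

static void *
sample_get_vtable(const char *interface_name)
{
    /* The probe returns NULL for capabilities this toaster does not offer. */
    if (strcmp(interface_name, "detoast_iterator") == 0)
        return &sample_iter_vtable;
    return NULL;
}

static ToasterRoutine sample_toaster = {"sample", sample_get_vtable};

/* A caller asks for the optional interface instead of patching core code. */
static int
count_chunks(ToasterRoutine *tsr)
{
    DetoastIterVtable *vt =
        (DetoastIterVtable *) tsr->get_vtable("detoast_iterator");
    int n = 0;

    if (vt == NULL)
        return -1;              /* capability absent: use plain detoast path */
    while (vt->iter_next() != -1)
        n++;
    return n;
}
```

A toaster that does not implement a given interface simply returns NULL from the hook, so callers can always fall back to the standard detoast path without any core modifications.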
I have to add some more in-depth explanation of the vtable
concept to
README and the documentation package; the dummy toaster contrib does not
cover
this topic at all.

On installing dummy_toaster contrib: I've just checked it by making a patch
from commit
and applying onto my clone of master and 2 patches provided in previous
email without
any errors and all checks passed - applying with git am, configure with
debug, cassert,
depend and enable-tap-tests flags and run checks.
Please advise what could cause such behavior?

Thank you!

On Tue, Sep 13, 2022 at 12:39 AM Jacob Champion <jchampion@timescale.com>
wrote:

> On Wed, Aug 24, 2022 at 2:59 AM Nikita Malakhov <hukutoc@gmail.com> wrote:
> > I've rebased actual branch onto the latest master and re-created
> patches. Checked with git am,
> > all applied correctly. Please check the attached patches.
> > Rebased branch resides here:
> > https://github.com/postgrespro/postgres/tree/toasterapi_clean
>
> I tried installing and using the dummy_toaster that's provided with
> the gitlink. Upgrade of that cluster fails with the following message:
>
>     pg_restore: creating TOASTER \"dummy_toaster\"
>     pg_restore: while PROCESSING TOC:
>     pg_restore: from TOC entry 2044; 9861 16390 TOASTER dummy_toaster (no
> owner)
>     pg_restore: error: could not execute query: ERROR:  unrecognized
> or unsupported class OID: 9861
>     Command was: CREATE TOASTER \"dummy_toaster\" HANDLER
> \"public\".\"dummy_toaster_handler\";
>
> I was looking through the thread for a more in-depth description of
> the \"vtable\" concept, but I didn't see one. It looks like it's just an
> arbitrary extension point, and any new additions would require surgery
> on whatever function needs the particular magic provided by the
> toaster. E.g. 
your bytea-append toaster extension in the gitlink,
> which still has to modify byteacat() in varlena.c to implement a very
> specific optimization, and then declares its support for that
> hardcoded optimization in the extension.
>
> I'm skeptical that this would remain coherent as it grows. The patch
> claims the vtable API is \"powerful\", which... I suppose it is, if you
> get to make arbitrary modifications to the core whenever you implement
> it. Did you already have thoughts about which operations would belong
> under that umbrella? What would the procedure be for adding
> functionality to that API? What happens if a toaster wants to
> implement two magic performance optimizations instead of one?
>
> --Jacob
>


-- 
Regards,
Nikita Malakhov
Postgres Professional
https://postgrespro.ru/", "msg_date": "Tue, 13 Sep 2022 09:44:50 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "On Mon, Sep 12, 2022 at 11:45 PM Nikita Malakhov <hukutoc@gmail.com> wrote:
> It would be more clear for complex data types like JSONB, where developers could
> need some additional functionality to work with internal representation of data type,
> and its full potential is revealed in our JSONB toaster extension. The JSONB toaster
> is still in development but we plan to make it available soon.

Okay. It'll be good to have that, because as it is now it's hard to
see the whole picture.

> On installing dummy_toaster contrib: I've just checked it by making a patch from commit
> and applying onto my clone of master and 2 patches provided in previous email without
> any errors and sll checks passed - applying with git am, configure with debug, cassert,
> depend and enable-tap-tests flags and run checks.
> Please advice what would cause such a behavior?

I don't think the default pg_upgrade tests will upgrade contrib
objects (there are instructions in src/bin/pg_upgrade/TESTING that
cover manual dumps, if you prefer that method). 
My manual steps were\nroughly\n\n =# CREATE EXTENSION dummy_toaster;\n =# CREATE TABLE test (t TEXT\n STORAGE external\n TOASTER dummy_toaster_handler);\n =# \\q\n $ initdb -D newdb\n $ pg_ctl -D olddb stop\n $ pg_upgrade -b <install path>/bin -B <install path>/bin -d\n./olddb -D ./newdb\n\n(where <install path>/bin is on the PATH, so we're using the right binaries).\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 13 Sep 2022 09:50:31 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers!\n\nCfbot is not happy with previous patchset, so I'm attaching new one,\nrebased onto current master\n(15b4). Also providing patch with documentation package, and the second one\ncontains large\nREADME.toastapi file providing additional in-depth docs for developers.\n\nComments would be greatly appreciated.\n\nAlso, after checking patch sources I have a strong opinion that it needs\nsome refactoring -\nmove all files related to TOAST implementation into new folder\n/backend/access/toast where\nGeneric (default) Toaster resides.\n\nPatchset consists of:\nv15-0001-toaster-interface.patch - Pluggable TOAST API interface along with\nreference TOAST mechanics;\nv15-0002-toaster-default.patch - Default TOAST re-implemented using Toaster\nAPI;\nv15-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n\nOn Tue, Sep 13, 2022 at 7:50 PM Jacob Champion <jchampion@timescale.com>\nwrote:\n\n> On Mon, Sep 12, 2022 at 11:45 PM Nikita Malakhov <hukutoc@gmail.com>\n> wrote:\n> > It would be more clear for complex data types like JSONB, where\n> developers could\n> > need some additional functionality to work with internal representation\n> of data type,\n> > and its full potential is revealed in our JSONB toaster extension. The\n> JSONB toaster\n> > is still in development but we plan to make it available soon.\n>\n> Okay. 
It'll be good to have that, because as it is now it's hard to\n> see the whole picture.\n>\n> > On installing dummy_toaster contrib: I've just checked it by making a\n> patch from commit\n> > and applying onto my clone of master and 2 patches provided in previous\n> email without\n> > any errors and sll checks passed - applying with git am, configure with\n> debug, cassert,\n> > depend and enable-tap-tests flags and run checks.\n> > Please advice what would cause such a behavior?\n>\n> I don't think the default pg_upgrade tests will upgrade contrib\n> objects (there are instructions in src/bin/pg_upgrade/TESTING that\n> cover manual dumps, if you prefer that method). My manual steps were\n> roughly\n>\n> =# CREATE EXTENSION dummy_toaster;\n> =# CREATE TABLE test (t TEXT\n> STORAGE external\n> TOASTER dummy_toaster_handler);\n> =# \\q\n> $ initdb -D newdb\n> $ pg_ctl -D olddb stop\n> $ pg_upgrade -b <install path>/bin -B <install path>/bin -d\n> ./olddb -D ./newdb\n>\n> (where <install path>/bin is on the PATH, so we're using the right\n> binaries).\n>\n> Thanks,\n> --Jacob\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Fri, 23 Sep 2022 22:54:54 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers!\n\nCfbot is still not happy with the patchset, so I'm attaching a rebased one,\nrebased onto the current\nmaster (from today). 
The third patch contains documentation package, and\nthe second one contains large\nREADME.toastapi file providing additional in-depth docs for developers.\n\nComments would be greatly appreciated.\n\nAgain, after checking patch sources I have a strong opinion that it needs\nsome refactoring -\nmove all files related to TOAST implementation into new folder\n/backend/access/toast where\nGeneric (default) Toaster resides.\n\nPatchset consists of:\nv16-0001-toaster-interface.patch - Pluggable TOAST API interface along with\nreference TOAST mechanics;\nv16-0002-toaster-default.patch - Default TOAST re-implemented using Toaster\nAPI;\nv16-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n\nActual GitHub branch resides at\nhttps://github.com/postgrespro/postgres/tree/toasterapi_clean\n\nOn Fri, Sep 23, 2022 at 10:54 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> Hi hackers!\n>\n> Cfbot is not happy with previous patchset, so I'm attaching new one,\n> rebased onto current master\n> (15b4). 
Also providing patch with documentation package, and the second\n> one contains large\n> README.toastapi file providing additional in-depth docs for developers.\n>\n> Comments would be greatly appreciated.\n>\n> Also, after checking patch sources I have a strong opinion that it needs\n> some refactoring -\n> move all files related to TOAST implementation into new folder\n> /backend/access/toast where\n> Generic (default) Toaster resides.\n>\n> Patchset consists of:\n> v15-0001-toaster-interface.patch - Pluggable TOAST API interface along\n> with reference TOAST mechanics;\n> v15-0002-toaster-default.patch - Default TOAST re-implemented using\n> Toaster API;\n> v15-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n>\n> On Tue, Sep 13, 2022 at 7:50 PM Jacob Champion <jchampion@timescale.com>\n> wrote:\n>\n>> On Mon, Sep 12, 2022 at 11:45 PM Nikita Malakhov <hukutoc@gmail.com>\n>> wrote:\n>> > It would be more clear for complex data types like JSONB, where\n>> developers could\n>> > need some additional functionality to work with internal representation\n>> of data type,\n>> > and its full potential is revealed in our JSONB toaster extension. The\n>> JSONB toaster\n>> > is still in development but we plan to make it available soon.\n>>\n>> Okay. It'll be good to have that, because as it is now it's hard to\n>> see the whole picture.\n>>\n>> > On installing dummy_toaster contrib: I've just checked it by making a\n>> patch from commit\n>> > and applying onto my clone of master and 2 patches provided in previous\n>> email without\n>> > any errors and sll checks passed - applying with git am, configure with\n>> debug, cassert,\n>> > depend and enable-tap-tests flags and run checks.\n>> > Please advice what would cause such a behavior?\n>>\n>> I don't think the default pg_upgrade tests will upgrade contrib\n>> objects (there are instructions in src/bin/pg_upgrade/TESTING that\n>> cover manual dumps, if you prefer that method). 
My manual steps were\n>> roughly\n>>\n>> =# CREATE EXTENSION dummy_toaster;\n>> =# CREATE TABLE test (t TEXT\n>> STORAGE external\n>> TOASTER dummy_toaster_handler);\n>> =# \\q\n>> $ initdb -D newdb\n>> $ pg_ctl -D olddb stop\n>> $ pg_upgrade -b <install path>/bin -B <install path>/bin -d\n>> ./olddb -D ./newdb\n>>\n>> (where <install path>/bin is on the PATH, so we're using the right\n>> binaries).\n>>\n>> Thanks,\n>> --Jacob\n>>\n>\n>\n> --\n> Regards,\n> Nikita Malakhov\n> Postgres Professional\n> https://postgrespro.ru/\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Sat, 24 Sep 2022 15:50:24 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers!\nLast patchset has an invalid patch file - v16-0003-toaster-docs.patch.\nHere's corrected patchset,\nsorry for the noise.\n\nOn Sat, Sep 24, 2022 at 3:50 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> Hi hackers!\n>\n> Cfbot is still not happy with the patchset, so I'm attaching a rebased\n> one, rebased onto the current\n> master (from today). 
The third patch contains documentation package, and\n> the second one contains large\n> README.toastapi file providing additional in-depth docs for developers.\n>\n> Comments would be greatly appreciated.\n>\n> Again, after checking patch sources I have a strong opinion that it needs\n> some refactoring -\n> move all files related to TOAST implementation into new folder\n> /backend/access/toast where\n> Generic (default) Toaster resides.\n>\n> Patchset consists of:\n> v16-0001-toaster-interface.patch - Pluggable TOAST API interface along\n> with reference TOAST mechanics;\n> v16-0002-toaster-default.patch - Default TOAST re-implemented using\n> Toaster API;\n> v16-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n>\n> Actual GitHub branch resides at\n> https://github.com/postgrespro/postgres/tree/toasterapi_clean\n>\n> On Fri, Sep 23, 2022 at 10:54 PM Nikita Malakhov <hukutoc@gmail.com>\n> wrote:\n>\n>> Hi hackers!\n>>\n>> Cfbot is not happy with previous patchset, so I'm attaching new one,\n>> rebased onto current master\n>> (15b4). 
Also providing patch with documentation package, and the second\n>> one contains large\n>> README.toastapi file providing additional in-depth docs for developers.\n>>\n>> Comments would be greatly appreciated.\n>>\n>> Also, after checking patch sources I have a strong opinion that it needs\n>> some refactoring -\n>> move all files related to TOAST implementation into new folder\n>> /backend/access/toast where\n>> Generic (default) Toaster resides.\n>>\n>> Patchset consists of:\n>> v15-0001-toaster-interface.patch - Pluggable TOAST API interface along\n>> with reference TOAST mechanics;\n>> v15-0002-toaster-default.patch - Default TOAST re-implemented using\n>> Toaster API;\n>> v15-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n>>\n>> On Tue, Sep 13, 2022 at 7:50 PM Jacob Champion <jchampion@timescale.com>\n>> wrote:\n>>\n>>> On Mon, Sep 12, 2022 at 11:45 PM Nikita Malakhov <hukutoc@gmail.com>\n>>> wrote:\n>>> > It would be more clear for complex data types like JSONB, where\n>>> developers could\n>>> > need some additional functionality to work with internal\n>>> representation of data type,\n>>> > and its full potential is revealed in our JSONB toaster extension. The\n>>> JSONB toaster\n>>> > is still in development but we plan to make it available soon.\n>>>\n>>> Okay. 
It'll be good to have that, because as it is now it's hard to\n>>> see the whole picture.\n>>>\n>>> > On installing dummy_toaster contrib: I've just checked it by making a\n>>> patch from commit\n>>> > and applying onto my clone of master and 2 patches provided in\n>>> previous email without\n>>> > any errors and sll checks passed - applying with git am, configure\n>>> with debug, cassert,\n>>> > depend and enable-tap-tests flags and run checks.\n>>> > Please advice what would cause such a behavior?\n>>>\n>>> I don't think the default pg_upgrade tests will upgrade contrib\n>>> objects (there are instructions in src/bin/pg_upgrade/TESTING that\n>>> cover manual dumps, if you prefer that method). My manual steps were\n>>> roughly\n>>>\n>>> =# CREATE EXTENSION dummy_toaster;\n>>> =# CREATE TABLE test (t TEXT\n>>> STORAGE external\n>>> TOASTER dummy_toaster_handler);\n>>> =# \\q\n>>> $ initdb -D newdb\n>>> $ pg_ctl -D olddb stop\n>>> $ pg_upgrade -b <install path>/bin -B <install path>/bin -d\n>>> ./olddb -D ./newdb\n>>>\n>>> (where <install path>/bin is on the PATH, so we're using the right\n>>> binaries).\n>>>\n>>> Thanks,\n>>> --Jacob\n>>>\n>>\n>>\n>> --\n>> Regards,\n>> Nikita Malakhov\n>> Postgres Professional\n>> https://postgrespro.ru/\n>>\n>\n>\n> --\n> Regards,\n> Nikita Malakhov\n> Postgres Professional\n> https://postgrespro.ru/\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Sun, 25 Sep 2022 01:41:34 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\nMeson build for the patchset failed, meson build files attached and\nREADME/Doc package\nreworked with more detailed explanation of virtual function table along\nwith other corrections.\n\nOn Sun, Sep 25, 2022 at 1:41 AM Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> Hi hackers!\n> Last patchset has an invalid patch file - v16-0003-toaster-docs.patch.\n> Here's 
corrected patchset,\n> sorry for the noise.\n>\n> On Sat, Sep 24, 2022 at 3:50 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n>\n>> Hi hackers!\n>>\n>> Cfbot is still not happy with the patchset, so I'm attaching a rebased\n>> one, rebased onto the current\n>> master (from today). The third patch contains documentation package, and\n>> the second one contains large\n>> README.toastapi file providing additional in-depth docs for developers.\n>>\n>> Comments would be greatly appreciated.\n>>\n>> Again, after checking patch sources I have a strong opinion that it needs\n>> some refactoring -\n>> move all files related to TOAST implementation into new folder\n>> /backend/access/toast where\n>> Generic (default) Toaster resides.\n>>\n>> Patchset consists of:\n>> v16-0001-toaster-interface.patch - Pluggable TOAST API interface along\n>> with reference TOAST mechanics;\n>> v16-0002-toaster-default.patch - Default TOAST re-implemented using\n>> Toaster API;\n>> v16-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n>>\n>> Actual GitHub branch resides at\n>> https://github.com/postgrespro/postgres/tree/toasterapi_clean\n>>\n>> On Fri, Sep 23, 2022 at 10:54 PM Nikita Malakhov <hukutoc@gmail.com>\n>> wrote:\n>>\n>>> Hi hackers!\n>>>\n>>> Cfbot is not happy with previous patchset, so I'm attaching new one,\n>>> rebased onto current master\n>>> (15b4). 
Also providing patch with documentation package, and the second\n>>> one contains large\n>>> README.toastapi file providing additional in-depth docs for developers.\n>>>\n>>> Comments would be greatly appreciated.\n>>>\n>>> Also, after checking patch sources I have a strong opinion that it needs\n>>> some refactoring -\n>>> move all files related to TOAST implementation into new folder\n>>> /backend/access/toast where\n>>> Generic (default) Toaster resides.\n>>>\n>>> Patchset consists of:\n>>> v15-0001-toaster-interface.patch - Pluggable TOAST API interface along\n>>> with reference TOAST mechanics;\n>>> v15-0002-toaster-default.patch - Default TOAST re-implemented using\n>>> Toaster API;\n>>> v15-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n>>>\n>>> On Tue, Sep 13, 2022 at 7:50 PM Jacob Champion <jchampion@timescale.com>\n>>> wrote:\n>>>\n>>>> On Mon, Sep 12, 2022 at 11:45 PM Nikita Malakhov <hukutoc@gmail.com>\n>>>> wrote:\n>>>> > It would be more clear for complex data types like JSONB, where\n>>>> developers could\n>>>> > need some additional functionality to work with internal\n>>>> representation of data type,\n>>>> > and its full potential is revealed in our JSONB toaster extension.\n>>>> The JSONB toaster\n>>>> > is still in development but we plan to make it available soon.\n>>>>\n>>>> Okay. 
It'll be good to have that, because as it is now it's hard to\n>>>> see the whole picture.\n>>>>\n>>>> > On installing dummy_toaster contrib: I've just checked it by making a\n>>>> patch from commit\n>>>> > and applying onto my clone of master and 2 patches provided in\n>>>> previous email without\n>>>> > any errors and sll checks passed - applying with git am, configure\n>>>> with debug, cassert,\n>>>> > depend and enable-tap-tests flags and run checks.\n>>>> > Please advice what would cause such a behavior?\n>>>>\n>>>> I don't think the default pg_upgrade tests will upgrade contrib\n>>>> objects (there are instructions in src/bin/pg_upgrade/TESTING that\n>>>> cover manual dumps, if you prefer that method). My manual steps were\n>>>> roughly\n>>>>\n>>>> =# CREATE EXTENSION dummy_toaster;\n>>>> =# CREATE TABLE test (t TEXT\n>>>> STORAGE external\n>>>> TOASTER dummy_toaster_handler);\n>>>> =# \\q\n>>>> $ initdb -D newdb\n>>>> $ pg_ctl -D olddb stop\n>>>> $ pg_upgrade -b <install path>/bin -B <install path>/bin -d\n>>>> ./olddb -D ./newdb\n>>>>\n>>>> (where <install path>/bin is on the PATH, so we're using the right\n>>>> binaries).\n>>>>\n>>>> Thanks,\n>>>> --Jacob\n>>>>\n>>>\n>>>\n>>> --\n>>> Regards,\n>>> Nikita Malakhov\n>>> Postgres Professional\n>>> https://postgrespro.ru/\n>>>\n>>\n>>\n>> --\n>> Regards,\n>> Nikita Malakhov\n>> Postgres Professional\n>> https://postgrespro.ru/\n>>\n>\n>\n> --\n> Regards,\n> Nikita Malakhov\n> Postgres Professional\n> https://postgrespro.ru/\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Tue, 27 Sep 2022 00:26:32 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers!\nCfbot failed in meson build with previous patchsets, so I've rebased them\nonto the latest master and added necessary meson build info.\n\nPatchset consists of:\nv19-0001-toaster-interface.patch - Pluggable TOAST 
API interface along with\nreference TOAST mechanics - new API is introduced but\nreference TOAST is still unchanged;\nv19-0002-toaster-default.patch - Default TOAST re-implemented using Toaster API\n- reference TOAST is re-implemented via new API;\nv19-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n\nActual GitHub branch resides at\nhttps://github.com/postgrespro/postgres/tree/toasterapi_clean\n\nOn Tue, Sep 27, 2022 at 12:26 AM Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> Hi,\n> Meson build for the patchset failed, meson build files attached and\n> README/Doc package\n> reworked with more detailed explanation of virtual function table along\n> with other corrections.\n>\n> On Sun, Sep 25, 2022 at 1:41 AM Nikita Malakhov <hukutoc@gmail.com> wrote:\n>\n>> Hi hackers!\n>> Last patchset has an invalid patch file - v16-0003-toaster-docs.patch.\n>> Here's corrected patchset,\n>> sorry for the noise.\n>>\n>> On Sat, Sep 24, 2022 at 3:50 PM Nikita Malakhov <hukutoc@gmail.com>\n>> wrote:\n>>\n>>> Hi hackers!\n>>>\n>>> Cfbot is still not happy with the patchset, so I'm attaching a rebased\n>>> one, rebased onto the current\n>>> master (from today). 
The third patch contains documentation package, and\n>>> the second one contains large\n>>> README.toastapi file providing additional in-depth docs for developers.\n>>>\n>>> Comments would be greatly appreciated.\n>>>\n>>> Again, after checking patch sources I have a strong opinion that it\n>>> needs some refactoring -\n>>> move all files related to TOAST implementation into new folder\n>>> /backend/access/toast where\n>>> Generic (default) Toaster resides.\n>>>\n>>> Patchset consists of:\n>>> v16-0001-toaster-interface.patch - Pluggable TOAST API interface along\n>>> with reference TOAST mechanics;\n>>> v16-0002-toaster-default.patch - Default TOAST re-implemented using\n>>> Toaster API;\n>>> v16-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n>>>\n>>> Actual GitHub branch resides at\n>>> https://github.com/postgrespro/postgres/tree/toasterapi_clean\n>>>\n>>> On Fri, Sep 23, 2022 at 10:54 PM Nikita Malakhov <hukutoc@gmail.com>\n>>> wrote:\n>>>\n>>>> Hi hackers!\n>>>>\n>>>> Cfbot is not happy with previous patchset, so I'm attaching new one,\n>>>> rebased onto current master\n>>>> (15b4). 
Also providing patch with documentation package, and the second\n>>>> one contains large\n>>>> README.toastapi file providing additional in-depth docs for developers.\n>>>>\n>>>> Comments would be greatly appreciated.\n>>>>\n>>>> Also, after checking patch sources I have a strong opinion that it\n>>>> needs some refactoring -\n>>>> move all files related to TOAST implementation into new folder\n>>>> /backend/access/toast where\n>>>> Generic (default) Toaster resides.\n>>>>\n>>>> Patchset consists of:\n>>>> v15-0001-toaster-interface.patch - Pluggable TOAST API interface along\n>>>> with reference TOAST mechanics;\n>>>> v15-0002-toaster-default.patch - Default TOAST re-implemented using\n>>>> Toaster API;\n>>>> v15-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n>>>>\n>>>> On Tue, Sep 13, 2022 at 7:50 PM Jacob Champion <jchampion@timescale.com>\n>>>> wrote:\n>>>>\n>>>>> On Mon, Sep 12, 2022 at 11:45 PM Nikita Malakhov <hukutoc@gmail.com>\n>>>>> wrote:\n>>>>> > It would be more clear for complex data types like JSONB, where\n>>>>> developers could\n>>>>> > need some additional functionality to work with internal\n>>>>> representation of data type,\n>>>>> > and its full potential is revealed in our JSONB toaster extension.\n>>>>> The JSONB toaster\n>>>>> > is still in development but we plan to make it available soon.\n>>>>>\n>>>>> Okay. 
It'll be good to have that, because as it is now it's hard to\n>>>>> see the whole picture.\n>>>>>\n>>>>> > On installing dummy_toaster contrib: I've just checked it by making\n>>>>> a patch from commit\n>>>>> > and applying onto my clone of master and 2 patches provided in\n>>>>> previous email without\n>>>>> > any errors and sll checks passed - applying with git am, configure\n>>>>> with debug, cassert,\n>>>>> > depend and enable-tap-tests flags and run checks.\n>>>>> > Please advice what would cause such a behavior?\n>>>>>\n>>>>> I don't think the default pg_upgrade tests will upgrade contrib\n>>>>> objects (there are instructions in src/bin/pg_upgrade/TESTING that\n>>>>> cover manual dumps, if you prefer that method). My manual steps were\n>>>>> roughly\n>>>>>\n>>>>> =# CREATE EXTENSION dummy_toaster;\n>>>>> =# CREATE TABLE test (t TEXT\n>>>>> STORAGE external\n>>>>> TOASTER dummy_toaster_handler);\n>>>>> =# \\q\n>>>>> $ initdb -D newdb\n>>>>> $ pg_ctl -D olddb stop\n>>>>> $ pg_upgrade -b <install path>/bin -B <install path>/bin -d\n>>>>> ./olddb -D ./newdb\n>>>>>\n>>>>> (where <install path>/bin is on the PATH, so we're using the right\n>>>>> binaries).\n>>>>>\n>>>>> Thanks,\n>>>>> --Jacob\n>>>>>\n>>>>\n>>>>\n>>>> --\n>>>> Regards,\n>>>> Nikita Malakhov\n>>>> Postgres Professional\n>>>> https://postgrespro.ru/\n>>>>\n>>>\n>>>\n>>> --\n>>> Regards,\n>>> Nikita Malakhov\n>>> Postgres Professional\n>>> https://postgrespro.ru/\n>>>\n>>\n>>\n>> --\n>> Regards,\n>> Nikita Malakhov\n>> Postgres Professional\n>> https://postgrespro.ru/\n>>\n>\n>\n> --\n> Regards,\n> Nikita Malakhov\n> Postgres Professional\n> https://postgrespro.ru/\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Tue, 4 Oct 2022 01:02:19 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers!\nNow cfbot is happy, but there were warnings due to recent 
changes in\nPointerGetDatum function, so here's corrected patchset.\n\nPatchset consists of:\nv20-0001-toaster-interface.patch - Pluggable TOAST API interface along with\nreference TOAST mechanics - new API is introduced but\nreference TOAST is still unchanged;\nv20-0002-toaster-default.patch - Default TOAST re-implemented\nusing Toaster API - reference TOAST is re-implemented via new API;\nv20-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n\nActual GitHub branch resides at\nhttps://github.com/postgrespro/postgres/tree/toasterapi_clean\n\nOn Tue, Oct 4, 2022 at 1:02 AM Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> Hi hackers!\n> Cfbot failed in meson build with previous patchsets, so I've rebased them\n> onto the latest master and added necessary meson build info.\n>\n> Patchset consists of:\n> v19-0001-toaster-interface.patch - Pluggable TOAST API interface along\n> with reference TOAST mechanics - new API is introduced but\n> reference TOAST is still unchanged;\n> v19-0002-toaster-default.patch - Default TOAST re-implemented using\n> Toaster API - reference TOAST is re-implemented via new API;\n> v19-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n>\n> Actual GitHub branch resides at\n> https://github.com/postgrespro/postgres/tree/toasterapi_clean\n>\n> On Tue, Sep 27, 2022 at 12:26 AM Nikita Malakhov <hukutoc@gmail.com>\n> wrote:\n>\n>> Hi,\n>> Meson build for the patchset failed, meson build files attached and\n>> README/Doc package\n>> reworked with more detailed explanation of virtual function table along\n>> with other corrections.\n>>\n>> On Sun, Sep 25, 2022 at 1:41 AM Nikita Malakhov <hukutoc@gmail.com>\n>> wrote:\n>>\n>>> Hi hackers!\n>>> Last patchset has an invalid patch file - v16-0003-toaster-docs.patch.\n>>> Here's corrected patchset,\n>>> sorry for the noise.\n>>>\n>>> On Sat, Sep 24, 2022 at 3:50 PM Nikita Malakhov <hukutoc@gmail.com>\n>>> wrote:\n>>>\n>>>> Hi hackers!\n>>>>\n>>>> Cfbot is still not 
happy with the patchset, so I'm attaching a rebased\n>>>> one, rebased onto the current\n>>>> master (from today). The third patch contains documentation package,\n>>>> and the second one contains large\n>>>> README.toastapi file providing additional in-depth docs for developers.\n>>>>\n>>>> Comments would be greatly appreciated.\n>>>>\n>>>> Again, after checking patch sources I have a strong opinion that it\n>>>> needs some refactoring -\n>>>> move all files related to TOAST implementation into new folder\n>>>> /backend/access/toast where\n>>>> Generic (default) Toaster resides.\n>>>>\n>>>> Patchset consists of:\n>>>> v16-0001-toaster-interface.patch - Pluggable TOAST API interface along\n>>>> with reference TOAST mechanics;\n>>>> v16-0002-toaster-default.patch - Default TOAST re-implemented using\n>>>> Toaster API;\n>>>> v16-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n>>>>\n>>>> Actual GitHub branch resides at\n>>>> https://github.com/postgrespro/postgres/tree/toasterapi_clean\n>>>>\n>>>> On Fri, Sep 23, 2022 at 10:54 PM Nikita Malakhov <hukutoc@gmail.com>\n>>>> wrote:\n>>>>\n>>>>> Hi hackers!\n>>>>>\n>>>>> Cfbot is not happy with previous patchset, so I'm attaching new one,\n>>>>> rebased onto current master\n>>>>> (15b4). 
Also providing patch with documentation package, and the\n>>>>> second one contains large\n>>>>> README.toastapi file providing additional in-depth docs for developers.\n>>>>>\n>>>>> Comments would be greatly appreciated.\n>>>>>\n>>>>> Also, after checking patch sources I have a strong opinion that it\n>>>>> needs some refactoring -\n>>>>> move all files related to TOAST implementation into new folder\n>>>>> /backend/access/toast where\n>>>>> Generic (default) Toaster resides.\n>>>>>\n>>>>> Patchset consists of:\n>>>>> v15-0001-toaster-interface.patch - Pluggable TOAST API interface along\n>>>>> with reference TOAST mechanics;\n>>>>> v15-0002-toaster-default.patch - Default TOAST re-implemented using\n>>>>> Toaster API;\n>>>>> v15-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n>>>>>\n>>>>> On Tue, Sep 13, 2022 at 7:50 PM Jacob Champion <\n>>>>> jchampion@timescale.com> wrote:\n>>>>>\n>>>>>> On Mon, Sep 12, 2022 at 11:45 PM Nikita Malakhov <hukutoc@gmail.com>\n>>>>>> wrote:\n>>>>>> > It would be more clear for complex data types like JSONB, where\n>>>>>> developers could\n>>>>>> > need some additional functionality to work with internal\n>>>>>> representation of data type,\n>>>>>> > and its full potential is revealed in our JSONB toaster extension.\n>>>>>> The JSONB toaster\n>>>>>> > is still in development but we plan to make it available soon.\n>>>>>>\n>>>>>> Okay. 
It'll be good to have that, because as it is now it's hard to\n>>>>>> see the whole picture.\n>>>>>>\n>>>>>> > On installing dummy_toaster contrib: I've just checked it by\n>>>>>> making a patch from commit\n>>>>>> > and applying onto my clone of master and 2 patches provided in\n>>>>>> previous email without\n>>>>>> > any errors and sll checks passed - applying with git am, configure\n>>>>>> with debug, cassert,\n>>>>>> > depend and enable-tap-tests flags and run checks.\n>>>>>> > Please advice what would cause such a behavior?\n>>>>>>\n>>>>>> I don't think the default pg_upgrade tests will upgrade contrib\n>>>>>> objects (there are instructions in src/bin/pg_upgrade/TESTING that\n>>>>>> cover manual dumps, if you prefer that method). My manual steps were\n>>>>>> roughly\n>>>>>>\n>>>>>> =# CREATE EXTENSION dummy_toaster;\n>>>>>> =# CREATE TABLE test (t TEXT\n>>>>>> STORAGE external\n>>>>>> TOASTER dummy_toaster_handler);\n>>>>>> =# \\q\n>>>>>> $ initdb -D newdb\n>>>>>> $ pg_ctl -D olddb stop\n>>>>>> $ pg_upgrade -b <install path>/bin -B <install path>/bin -d\n>>>>>> ./olddb -D ./newdb\n>>>>>>\n>>>>>> (where <install path>/bin is on the PATH, so we're using the right\n>>>>>> binaries).\n>>>>>>\n>>>>>> Thanks,\n>>>>>> --Jacob\n>>>>>>\n>>>>>\n>>>>>\n>>>>> --\n>>>>> Regards,\n>>>>> Nikita Malakhov\n>>>>> Postgres Professional\n>>>>> https://postgrespro.ru/\n>>>>>\n>>>>\n>>>>\n>>>> --\n>>>> Regards,\n>>>> Nikita Malakhov\n>>>> Postgres Professional\n>>>> https://postgrespro.ru/\n>>>>\n>>>\n>>>\n>>> --\n>>> Regards,\n>>> Nikita Malakhov\n>>> Postgres Professional\n>>> https://postgrespro.ru/\n>>>\n>>\n>>\n>> --\n>> Regards,\n>> Nikita Malakhov\n>> Postgres Professional\n>> https://postgrespro.ru/\n>>\n>\n>\n> --\n> Regards,\n> Nikita Malakhov\n> Postgres Professional\n> https://postgrespro.ru/\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Tue, 4 Oct 2022 13:45:38 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers!\nNow cfbot is happy, but there were warnings due to recent changes in\nPointerGetDatum function, so here's corrected patchset.\nSorry, forgot to attach
patch files. My fault.\n\nPatchset consists of:\nv20-0001-toaster-interface.patch - Pluggable TOAST API interface along with\nreference TOAST mechanics - new API is introduced but\nreference TOAST is still unchanged;\nv20-0002-toaster-default.patch - Default TOAST re-implemented\nusing Toaster API - reference TOAST is re-implemented via new API;\nv20-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n\nActual GitHub branch resides at\nhttps://github.com/postgrespro/postgres/tree/toasterapi_clean\n\nOn Tue, Oct 4, 2022 at 1:45 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> Hi hackers!\n> Now cfbot is happy, but there were warnings due to recent changes in\n> PointerGetDatum function, so here's corrected patchset.\n>\n> Patchset consists of:\n> v20-0001-toaster-interface.patch - Pluggable TOAST API interface along\n> with reference TOAST mechanics - new API is introduced but\n> reference TOAST is still unchanged;\n> v20-0002-toaster-default.patch - Default TOAST re-implemented\n> using Toaster API - reference TOAST is re-implemented via new API;\n> v20-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n>\n> Actual GitHub branch resides at\n> https://github.com/postgrespro/postgres/tree/toasterapi_clean\n>\n> On Tue, Oct 4, 2022 at 1:02 AM Nikita Malakhov <hukutoc@gmail.com> wrote:\n>\n>> Hi hackers!\n>> Cfbot failed in meson build with previous patchsets, so I've rebased them\n>> onto the latest master and added necessary meson build info.\n>>\n>> Patchset consists of:\n>> v19-0001-toaster-interface.patch - Pluggable TOAST API interface along\n>> with reference TOAST mechanics - new API is introduced but\n>> reference TOAST is still unchanged;\n>> v19-0002-toaster-default.patch - Default TOAST re-implemented using\n>> Toaster API - reference TOAST is re-implemented via new API;\n>> v19-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n>>\n>> Actual GitHub branch resides at\n>> 
https://github.com/postgrespro/postgres/tree/toasterapi_clean\n>>\n>> On Tue, Sep 27, 2022 at 12:26 AM Nikita Malakhov <hukutoc@gmail.com>\n>> wrote:\n>>\n>>> Hi,\n>>> Meson build for the patchset failed, meson build files attached and\n>>> README/Doc package\n>>> reworked with more detailed explanation of virtual function table along\n>>> with other corrections.\n>>>\n>>> On Sun, Sep 25, 2022 at 1:41 AM Nikita Malakhov <hukutoc@gmail.com>\n>>> wrote:\n>>>\n>>>> Hi hackers!\n>>>> Last patchset has an invalid patch file - v16-0003-toaster-docs.patch.\n>>>> Here's corrected patchset,\n>>>> sorry for the noise.\n>>>>\n>>>> On Sat, Sep 24, 2022 at 3:50 PM Nikita Malakhov <hukutoc@gmail.com>\n>>>> wrote:\n>>>>\n>>>>> Hi hackers!\n>>>>>\n>>>>> Cfbot is still not happy with the patchset, so I'm attaching a rebased\n>>>>> one, rebased onto the current\n>>>>> master (from today). The third patch contains documentation package,\n>>>>> and the second one contains large\n>>>>> README.toastapi file providing additional in-depth docs for developers.\n>>>>>\n>>>>> Comments would be greatly appreciated.\n>>>>>\n>>>>> Again, after checking patch sources I have a strong opinion that it\n>>>>> needs some refactoring -\n>>>>> move all files related to TOAST implementation into new folder\n>>>>> /backend/access/toast where\n>>>>> Generic (default) Toaster resides.\n>>>>>\n>>>>> Patchset consists of:\n>>>>> v16-0001-toaster-interface.patch - Pluggable TOAST API interface along\n>>>>> with reference TOAST mechanics;\n>>>>> v16-0002-toaster-default.patch - Default TOAST re-implemented using\n>>>>> Toaster API;\n>>>>> v16-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n>>>>>\n>>>>> Actual GitHub branch resides at\n>>>>> https://github.com/postgrespro/postgres/tree/toasterapi_clean\n>>>>>\n>>>>> On Fri, Sep 23, 2022 at 10:54 PM Nikita Malakhov <hukutoc@gmail.com>\n>>>>> wrote:\n>>>>>\n>>>>>> Hi hackers!\n>>>>>>\n>>>>>> Cfbot is not happy with previous patchset, so I'm 
attaching new one,\n>>>>>> rebased onto current master\n>>>>>> (15b4). Also providing patch with documentation package, and the\n>>>>>> second one contains large\n>>>>>> README.toastapi file providing additional in-depth docs for\n>>>>>> developers.\n>>>>>>\n>>>>>> Comments would be greatly appreciated.\n>>>>>>\n>>>>>> Also, after checking patch sources I have a strong opinion that it\n>>>>>> needs some refactoring -\n>>>>>> move all files related to TOAST implementation into new folder\n>>>>>> /backend/access/toast where\n>>>>>> Generic (default) Toaster resides.\n>>>>>>\n>>>>>> Patchset consists of:\n>>>>>> v15-0001-toaster-interface.patch - Pluggable TOAST API interface\n>>>>>> along with reference TOAST mechanics;\n>>>>>> v15-0002-toaster-default.patch - Default TOAST re-implemented using\n>>>>>> Toaster API;\n>>>>>> v15-0003-toaster-docs.patch - Pluggable TOAST API documentation\n>>>>>> package\n>>>>>>\n>>>>>> On Tue, Sep 13, 2022 at 7:50 PM Jacob Champion <\n>>>>>> jchampion@timescale.com> wrote:\n>>>>>>\n>>>>>>> On Mon, Sep 12, 2022 at 11:45 PM Nikita Malakhov <hukutoc@gmail.com>\n>>>>>>> wrote:\n>>>>>>> > It would be more clear for complex data types like JSONB, where\n>>>>>>> developers could\n>>>>>>> > need some additional functionality to work with internal\n>>>>>>> representation of data type,\n>>>>>>> > and its full potential is revealed in our JSONB toaster extension.\n>>>>>>> The JSONB toaster\n>>>>>>> > is still in development but we plan to make it available soon.\n>>>>>>>\n>>>>>>> Okay. 
It'll be good to have that, because as it is now it's hard to\n>>>>>>> see the whole picture.\n>>>>>>>\n>>>>>>> > On installing dummy_toaster contrib: I've just checked it by\n>>>>>>> making a patch from commit\n>>>>>>> > and applying onto my clone of master and 2 patches provided in\n>>>>>>> previous email without\n>>>>>>> > any errors and sll checks passed - applying with git am, configure\n>>>>>>> with debug, cassert,\n>>>>>>> > depend and enable-tap-tests flags and run checks.\n>>>>>>> > Please advice what would cause such a behavior?\n>>>>>>>\n>>>>>>> I don't think the default pg_upgrade tests will upgrade contrib\n>>>>>>> objects (there are instructions in src/bin/pg_upgrade/TESTING that\n>>>>>>> cover manual dumps, if you prefer that method). My manual steps were\n>>>>>>> roughly\n>>>>>>>\n>>>>>>> =# CREATE EXTENSION dummy_toaster;\n>>>>>>> =# CREATE TABLE test (t TEXT\n>>>>>>> STORAGE external\n>>>>>>> TOASTER dummy_toaster_handler);\n>>>>>>> =# \\q\n>>>>>>> $ initdb -D newdb\n>>>>>>> $ pg_ctl -D olddb stop\n>>>>>>> $ pg_upgrade -b <install path>/bin -B <install path>/bin -d\n>>>>>>> ./olddb -D ./newdb\n>>>>>>>\n>>>>>>> (where <install path>/bin is on the PATH, so we're using the right\n>>>>>>> binaries).\n>>>>>>>\n>>>>>>> Thanks,\n>>>>>>> --Jacob\n>>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> --\n>>>>>> Regards,\n>>>>>> Nikita Malakhov\n>>>>>> Postgres Professional\n>>>>>> https://postgrespro.ru/\n>>>>>>\n>>>>>\n>>>>>\n>>>>> --\n>>>>> Regards,\n>>>>> Nikita Malakhov\n>>>>> Postgres Professional\n>>>>> https://postgrespro.ru/\n>>>>>\n>>>>\n>>>>\n>>>> --\n>>>> Regards,\n>>>> Nikita Malakhov\n>>>> Postgres Professional\n>>>> https://postgrespro.ru/\n>>>>\n>>>\n>>>\n>>> --\n>>> Regards,\n>>> Nikita Malakhov\n>>> Postgres Professional\n>>> https://postgrespro.ru/\n>>>\n>>\n>>\n>> --\n>> Regards,\n>> Nikita Malakhov\n>> Postgres Professional\n>> https://postgrespro.ru/\n>>\n>\n>\n> --\n> Regards,\n> Nikita Malakhov\n> Postgres Professional\n> 
https://postgrespro.ru/\n>\n\n\n-- \nRegards,\n\n--\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Tue, 4 Oct 2022 13:46:31 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers!\ncfbot is unhappy again with the documentation package. Here's a\ncorrected patchset.\n\nPatchset consists of:\nv21-0001-toaster-interface.patch - Pluggable TOAST API interface along with\nreference TOAST mechanics - new API is introduced but\nreference TOAST is still unchanged;\nv21-0002-toaster-default.patch - Default TOAST re-implemented\nusing Toaster API - reference TOAST is re-implemented via new API;\nv21-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n\nActual GitHub branch resides at\nhttps://github.com/postgrespro/postgres/tree/toasterapi_clean\n\nOn Tue, Oct 4, 2022 at 1:46 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> Hi hackers!\n> Now cfbot is happy, but there were warnings due to recent changes in\n> PointerGetDatum function, so here's corrected patchset.\n> Sorry, forgot to attach patch files. 
My fault.\n>\n> Patchset consists of:\n> v20-0001-toaster-interface.patch - Pluggable TOAST API interface along\n> with reference TOAST mechanics - new API is introduced but\n> reference TOAST is still unchanged;\n> v20-0002-toaster-default.patch - Default TOAST re-implemented\n> using Toaster API - reference TOAST is re-implemented via new API;\n> v20-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n>\n> Actual GitHub branch resides at\n> https://github.com/postgrespro/postgres/tree/toasterapi_clean\n>\n> On Tue, Oct 4, 2022 at 1:45 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n>\n>> Hi hackers!\n>> Now cfbot is happy, but there were warnings due to recent changes in\n>> PointerGetDatum function, so here's corrected patchset.\n>>\n>> Patchset consists of:\n>> v20-0001-toaster-interface.patch - Pluggable TOAST API interface along\n>> with reference TOAST mechanics - new API is introduced but\n>> reference TOAST is still unchanged;\n>> v20-0002-toaster-default.patch - Default TOAST re-implemented\n>> using Toaster API - reference TOAST is re-implemented via new API;\n>> v20-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n>>\n>> Actual GitHub branch resides at\n>> https://github.com/postgrespro/postgres/tree/toasterapi_clean\n>>\n>> On Tue, Oct 4, 2022 at 1:02 AM Nikita Malakhov <hukutoc@gmail.com> wrote:\n>>\n>>> Hi hackers!\n>>> Cfbot failed in meson build with previous patchsets, so I've rebased\n>>> them onto the latest master and added necessary meson build info.\n>>>\n>>> Patchset consists of:\n>>> v19-0001-toaster-interface.patch - Pluggable TOAST API interface along\n>>> with reference TOAST mechanics - new API is introduced but\n>>> reference TOAST is still unchanged;\n>>> v19-0002-toaster-default.patch - Default TOAST re-implemented using\n>>> Toaster API - reference TOAST is re-implemented via new API;\n>>> v19-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n>>>\n>>> Actual GitHub branch 
resides at\n>>> https://github.com/postgrespro/postgres/tree/toasterapi_clean\n>>>\n>>> On Tue, Sep 27, 2022 at 12:26 AM Nikita Malakhov <hukutoc@gmail.com>\n>>> wrote:\n>>>\n>>>> Hi,\n>>>> Meson build for the patchset failed, meson build files attached and\n>>>> README/Doc package\n>>>> reworked with more detailed explanation of virtual function table along\n>>>> with other corrections.\n>>>>\n>>>> On Sun, Sep 25, 2022 at 1:41 AM Nikita Malakhov <hukutoc@gmail.com>\n>>>> wrote:\n>>>>\n>>>>> Hi hackers!\n>>>>> Last patchset has an invalid patch file - v16-0003-toaster-docs.patch.\n>>>>> Here's corrected patchset,\n>>>>> sorry for the noise.\n>>>>>\n>>>>> On Sat, Sep 24, 2022 at 3:50 PM Nikita Malakhov <hukutoc@gmail.com>\n>>>>> wrote:\n>>>>>\n>>>>>> Hi hackers!\n>>>>>>\n>>>>>> Cfbot is still not happy with the patchset, so I'm attaching a\n>>>>>> rebased one, rebased onto the current\n>>>>>> master (from today). The third patch contains documentation package,\n>>>>>> and the second one contains large\n>>>>>> README.toastapi file providing additional in-depth docs for\n>>>>>> developers.\n>>>>>>\n>>>>>> Comments would be greatly appreciated.\n>>>>>>\n>>>>>> Again, after checking patch sources I have a strong opinion that it\n>>>>>> needs some refactoring -\n>>>>>> move all files related to TOAST implementation into new folder\n>>>>>> /backend/access/toast where\n>>>>>> Generic (default) Toaster resides.\n>>>>>>\n>>>>>> Patchset consists of:\n>>>>>> v16-0001-toaster-interface.patch - Pluggable TOAST API interface\n>>>>>> along with reference TOAST mechanics;\n>>>>>> v16-0002-toaster-default.patch - Default TOAST re-implemented using\n>>>>>> Toaster API;\n>>>>>> v16-0003-toaster-docs.patch - Pluggable TOAST API documentation\n>>>>>> package\n>>>>>>\n>>>>>> Actual GitHub branch resides at\n>>>>>> https://github.com/postgrespro/postgres/tree/toasterapi_clean\n>>>>>>\n>>>>>> On Fri, Sep 23, 2022 at 10:54 PM Nikita Malakhov <hukutoc@gmail.com>\n>>>>>> 
wrote:\n>>>>>>\n>>>>>>> Hi hackers!\n>>>>>>>\n>>>>>>> Cfbot is not happy with previous patchset, so I'm attaching new one,\n>>>>>>> rebased onto current master\n>>>>>>> (15b4). Also providing patch with documentation package, and the\n>>>>>>> second one contains large\n>>>>>>> README.toastapi file providing additional in-depth docs for\n>>>>>>> developers.\n>>>>>>>\n>>>>>>> Comments would be greatly appreciated.\n>>>>>>>\n>>>>>>> Also, after checking patch sources I have a strong opinion that it\n>>>>>>> needs some refactoring -\n>>>>>>> move all files related to TOAST implementation into new folder\n>>>>>>> /backend/access/toast where\n>>>>>>> Generic (default) Toaster resides.\n>>>>>>>\n>>>>>>> Patchset consists of:\n>>>>>>> v15-0001-toaster-interface.patch - Pluggable TOAST API interface\n>>>>>>> along with reference TOAST mechanics;\n>>>>>>> v15-0002-toaster-default.patch - Default TOAST re-implemented using\n>>>>>>> Toaster API;\n>>>>>>> v15-0003-toaster-docs.patch - Pluggable TOAST API documentation\n>>>>>>> package\n>>>>>>>\n>>>>>>> On Tue, Sep 13, 2022 at 7:50 PM Jacob Champion <\n>>>>>>> jchampion@timescale.com> wrote:\n>>>>>>>\n>>>>>>>> On Mon, Sep 12, 2022 at 11:45 PM Nikita Malakhov <hukutoc@gmail.com>\n>>>>>>>> wrote:\n>>>>>>>> > It would be more clear for complex data types like JSONB, where\n>>>>>>>> developers could\n>>>>>>>> > need some additional functionality to work with internal\n>>>>>>>> representation of data type,\n>>>>>>>> > and its full potential is revealed in our JSONB toaster\n>>>>>>>> extension. The JSONB toaster\n>>>>>>>> > is still in development but we plan to make it available soon.\n>>>>>>>>\n>>>>>>>> Okay. 
It'll be good to have that, because as it is now it's hard to\n>>>>>>>> see the whole picture.\n>>>>>>>>\n>>>>>>>> > On installing dummy_toaster contrib: I've just checked it by\n>>>>>>>> making a patch from commit\n>>>>>>>> > and applying onto my clone of master and 2 patches provided in\n>>>>>>>> previous email without\n>>>>>>>> > any errors and sll checks passed - applying with git am,\n>>>>>>>> configure with debug, cassert,\n>>>>>>>> > depend and enable-tap-tests flags and run checks.\n>>>>>>>> > Please advice what would cause such a behavior?\n>>>>>>>>\n>>>>>>>> I don't think the default pg_upgrade tests will upgrade contrib\n>>>>>>>> objects (there are instructions in src/bin/pg_upgrade/TESTING that\n>>>>>>>> cover manual dumps, if you prefer that method). My manual steps were\n>>>>>>>> roughly\n>>>>>>>>\n>>>>>>>> =# CREATE EXTENSION dummy_toaster;\n>>>>>>>> =# CREATE TABLE test (t TEXT\n>>>>>>>> STORAGE external\n>>>>>>>> TOASTER dummy_toaster_handler);\n>>>>>>>> =# \\q\n>>>>>>>> $ initdb -D newdb\n>>>>>>>> $ pg_ctl -D olddb stop\n>>>>>>>> $ pg_upgrade -b <install path>/bin -B <install path>/bin -d\n>>>>>>>> ./olddb -D ./newdb\n>>>>>>>>\n>>>>>>>> (where <install path>/bin is on the PATH, so we're using the right\n>>>>>>>> binaries).\n>>>>>>>>\n>>>>>>>> Thanks,\n>>>>>>>> --Jacob\n>>>>>>>>\n>>>>>>>\n>>>>>>>\n>>>>>>> --\n>>>>>>> Regards,\n>>>>>>> Nikita Malakhov\n>>>>>>> Postgres Professional\n>>>>>>> https://postgrespro.ru/\n>>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> --\n>>>>>> Regards,\n>>>>>> Nikita Malakhov\n>>>>>> Postgres Professional\n>>>>>> https://postgrespro.ru/\n>>>>>>\n>>>>>\n>>>>>\n>>>>> --\n>>>>> Regards,\n>>>>> Nikita Malakhov\n>>>>> Postgres Professional\n>>>>> https://postgrespro.ru/\n>>>>>\n>>>>\n>>>>\n>>>> --\n>>>> Regards,\n>>>> Nikita Malakhov\n>>>> Postgres Professional\n>>>> https://postgrespro.ru/\n>>>>\n>>>\n>>>\n>>> --\n>>> Regards,\n>>> Nikita Malakhov\n>>> Postgres Professional\n>>> https://postgrespro.ru/\n>>>\n>>\n>>\n>> --\n>> Regards,\n>> 
Nikita Malakhov\n>> Postgres Professional\n>> https://postgrespro.ru/\n>>\n>\n>\n> --\n> Regards,\n>\n> --\n> Nikita Malakhov\n> Postgres Professional\n> https://postgrespro.ru/\n>\n\n\n-- \nRegards,\n\n--\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Tue, 4 Oct 2022 22:41:22 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers!\n\nDue to recent changes in postgres.h cfbot is failing again.\nHere's rebased patch:\nv22-0001-toaster-interface.patch - Pluggable TOAST API interface with\nreference (original) TOAST mechanics - new API is introduced but\nreference TOAST is still left unchanged;\nv22-0002-toaster-default.patch - Default TOAST mechanics is re-implemented\nusing TOAST API and is plugged in as Default Toaster;\nv22-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n\nActual GitHub branch resides at\nhttps://github.com/postgrespro/postgres/tree/toasterapi_clean\nPlease note that since development is ongoing, the actual branch\ncontains much more than is provided here.\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Thu, 13 Oct 2022 02:13:23 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers,\n\nFixed a warning that broke the build in the previous patchset.\nHere's rebased patch:\nv23-0001-toaster-interface.patch - Pluggable TOAST API interface with\nreference (original) TOAST mechanics - new API is introduced but\nreference TOAST is still left unchanged;\nv23-0002-toaster-default.patch - Default TOAST mechanics is re-implemented\nusing TOAST API and is plugged in as Default Toaster;\nv23-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n\nFor TOAST API explanation please check /src/backend/access/README.toastapi\n\nActual GitHub branch resides
at\nhttps://github.com/postgrespro/postgres/tree/toasterapi_clean\nPlease note that since development is ongoing, the actual branch\ncontains much more than is provided here.\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Thu, 13 Oct 2022 10:31:45 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi Nikita,\n\n> Here's rebased patch:\n> v23-0001-toaster-interface.patch - Pluggable TOAST API interface with reference (original) TOAST mechanics - new API is introduced but\n> reference TOAST is still left unchanged;\n> v23-0002-toaster-default.patch - Default TOAST mechanics is re-implemented using Toaster API and is plugged in as Default Toaster;\n> v23-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n\nThanks for keeping the patch up to date.\n\nAs I recall one of the open questions was: how this feature is\nsupposed to work with table access methods? Could you please summarize\nwhat the current consensus is in this respect?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 21 Oct 2022 16:01:15 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi!\n\nAleksander, we have had this in mind while developing this feature, and\nhave checked it.
Just a slight modification is needed\nto make it work with Pluggable Storage (Access Methods) API.\n\nOn Fri, Oct 21, 2022 at 4:01 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi Nikita,\n>\n> > Here's rebased patch:\n> > v23-0001-toaster-interface.patch - Pluggable TOAST API interface with\n> reference (original) TOAST mechanics - new API is introduced but\n> > reference TOAST is still left unchanged;\n> > v23-0002-toaster-default.patch - Default TOAST mechanics is\n> re-implemented using Toaster API and is plugged in as Default Toaster;\n> > v23-0003-toaster-docs.patch - Pluggable TOAST API documentation package\n>\n> Thanks for keeping the patch up to date.\n>\n> As I recall one of the open questions was: how this feature is\n> supposed to work with table access methods? Could you please summarize\n> what the current consensus is in this respect?\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Sat, 22 Oct 2022 01:36:31 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi Nikita,\n\n> Aleksander, we have had this in mind while developing this feature, and have checked it. Just a slight modification is needed\n> to make it work with Pluggable Storage (Access Methods) API.\n\nCould you please clarify this a little from the architectural point of view?\n\nLet's say company A implements some specific TableAM (in-memory / the\none that uses undo logging / etc). Company B implements an alternative\nTOAST mechanism.\n\nHow the TOAST extension is going to work without knowing any specifics\nof the TableAM the user chooses for the given relation, and vice\nversa? How one of the extensions is going to upgrade / downgrade\nbetween the versions without knowing any implementation details of\nanother?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Sat, 22 Oct 2022 11:58:10 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi!\n\nAleksander, this is a good question.\nIf I understood you correctly, you mean that the alternative TOAST\nmechanism B is using a specific\nTable AM A?\n\nPluggable TOAST API was designed with storage flexibility in mind, and\nCustom TOAST mechanics is\nfree to use any storage methods - we've tested it with some custom Toaster,\nbecause it is completely\nhidden from the caller, and is not limited to Heap, though extensions'\ninterdependencies is a very tricky\nquestion, and surely not the one to be answered quickly.\n\nStill, I have good news on this topic - I'm currently re-working Pluggable\nTOAST in a more OOP-correct\nway,
generalizing Table to Toaster relation from column attribute and\nreloptions, with a separate catalog table describing Relation, Toaster and\nTOAST storage entities' relations, with lazy TOAST table creation for the\nGeneric Toaster, and dropping the limit of 1 TOAST table per relation.\nIn the current implementation Toaster OID and TOAST relation ID are stored\nas a part of Relation, which is not the best solution and leaves some of\nthe Toaster's nuts and bolts open to the AM that uses it, so we decided to\nhide this part inside the Toaster too.\n\nThe next logical step is using the Table AM API, if a Table AM Routine is\nprovided to the Toaster, instead of direct calls to Heap AM methods.\n\nThis was thought of in the following way:\nthe Table AM Routine is passed to the Toaster as a parameter, and direct\nHeap calls are replaced with the TAM Routine calls. This is possible, but\nneeds further investigation, because TOAST manipulations with data require,\nas is seen from a first dive into the TAM API, some extension of this API.\n\nI'll present the results of our research as soon as they're ready.\n\nOn Sat, Oct 22, 2022 at 11:58 AM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi Nikita,\n>\n> > Aleksander, we have had this in mind while developing this feature, and\n> have checked it. Just a slight modification is needed\n> > to make it work with Pluggable Storage (Access Methods) API.\n>\n> Could you please clarify this a little from the architectural point of\n> view?\n>\n> Let's say company A implements some specific TableAM (in-memory / the\n> one that uses undo logging / etc). Company B implements an alternative\n> TOAST mechanism.\n>\n> How the TOAST extension is going to work without knowing any specifics\n> of the TableAM the user chooses for the given relation, and vice\n> versa?
How one of the extensions is going to upgrade / downgrade\n> between the versions without knowing any implementation details of\n> another?\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Sun, 23 Oct 2022 00:00:12 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi Nikita,\n\n> Pluggable TOAST API was designed with storage flexibility in mind, and Custom TOAST mechanics is\n> free to use any storage methods\n\nDon't you think that this is an arguable design decision? Basically\nall we know about the underlying TableAM is that it stores tuples\n_somehow_ and that tuples have TIDs [1]. That's it. We don't know if\nit even has any sort of pages, whether they are fixed in size or not,\nwhether it uses shared buffers, etc. It may not even require TOAST.\n(Not to mention the fact that when you have N TOAST implementations\nand M TableAM implementations now you have to run N x M compatibility\ntests.
And this doesn't account for different versions of Ns and Ms,\ndifferent platforms and different versions of PostgreSQL.)\n\nI believe the proposed approach is architecturally broken from the beginning.\n\nIt looks like the idea should be actually turned inside out. I.e. what\nwould be nice to have is some sort of _framework_ that helps TableAM\nauthors to implement TOAST (alternatively, the rest of the TableAM\nexcept for TOAST) if the TableAM is similar to the default one. In\nother words the idea is not to implement alternative TOASTers that\nwill work with all possible TableAMs but rather to simplify the task\nof implementing an alternative TableAM which is similar to the default\none except for TOAST. These TableAMs should reuse as much common code\nas possible except for the parts where they differ.\n\nDoes it make sense?\n\nSorry, I realize this will probably imply a complete rewrite of the\npatch. This is the reason why one should start proposing changes from\ngathering the requirements, writing an RFC and run it through several\nrounds of discussion.\n\n[1]: https://www.postgresql.org/docs/current/tableam.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Sun, 23 Oct 2022 12:38:06 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi!\n\nAleksander,\n>Don't you think that this is an arguable design decision? Basically\n>all we know about the underlying TableAM is that it stores tuples\n>_somehow_ and that tuples have TIDs [1]. That's it. We don't know if\n>it even has any sort of pages, whether they are fixed in size or not,\n>whether it uses shared buffers, etc. It may not even require TOAST.\n>(Not to mention the fact that when you have N TOAST implementations\n>and M TableAM implementations now you have to run N x M compatibility\n>tests. 
And this doesn't account for different versions of Ns and Ms,\n>different platforms and different versions of PostgreSQL.)\n\n>I believe the proposed approach is architecturally broken from the\nbeginning.\n\nExisting TOAST mechanics just works, but for certain types of data it does\nso\nvery poorly, and, let's face it, this mechanics has very strict limitations\nthat limit\noverall capabilities of DBMS, because TOAST was designed when today's\nusual amounts of data were not the case - I mean tables with hundreds of\nbillions of rows, with sizes measured by hundreds of Gb and even by\nTerabytes.\n\nBut TOAST itself is good solution to problem of storing oversized\nattributes, and\nthough it has some limitations - it is unwise to just throw it away, better\nway is to\nmake it up-to-date by revising it, get rid of the most painful limitations\nand allow\nto use different (custom) TOAST strategies for special cases.\n\nThe main idea of Pluggable TOAST is to extend TOAST capabilities by\nproviding\ncommon API allowing to uniformly use different strategies to TOAST\ndifferent data.\nWith the acronym \"TOAST\" I mean that data would be stored externally to\nsource\ntable, somewhere only its Toaster know where and how - it may be regular\nHeap\ntables, Heap tables with different table structure, some other AM tables,\nfiles outside\nof the database, even files on different storage systems. Pluggable TOAST\nallows\nusing advanced compression methods and complex operations on externally\nstored\ndata, like search without fully de-TOASTing data, etc.\n\nAlso, existing TOAST is a part of Heap AM and is restricted to use Heap\nonly.\nTo make it extensible - we have to separate TOAST from Heap AM. Default\nTOAST\nin Pluggable TOAST still uses Heap, but Heap knows nothing about TOAST. It\nfits\nperfectly in OOP paradigms\n\n>It looks like the idea should be actually turned inside out. I.e. 
what\n>would be nice to have is some sort of _framework_ that helps TableAM\n>authors to implement TOAST (alternatively, the rest of the TableAM\n>except for TOAST) if the TableAM is similar to the default one. In\n>other words the idea is not to implement alternative TOASTers that\n>will work with all possible TableAMs but rather to simplify the task\n>of implementing an alternative TableAM which is similar to the default\n>one except for TOAST. These TableAMs should reuse as much common code\n>as possible except for the parts where they differ.\n\nTo implement different TOAST strategies you must have an API to plug them\nin, otherwise for each strategy you'd have to change the core. The TOAST\nAPI allows plugging in custom TOAST strategies just by adding contrib\nmodules, once the API is merged into the core. I have to make a point that\ndifferent TOAST strategies do not have to store data with other TAMs; they\ncould just store these data in Heap, but use knowledge of the internal data\nstructure or workflow to store them in a more optimal way - like fast and\npartially compressed and decompressed JSON, lots of large chunks of binary\ndata stored in the database (as you know, large objects are not of much\nhelp with this) and so on.\n\nImplementing another Table AM just to implement another TOAST strategy\nseems too much - the TAM API is very heavy and complex, and you would have\nto add it as a contrib. Lots of different TAMs would cause many more\nproblems than lots of Toasters, because such a solution results in data\nincompatibility between installations with different TAMs, and some minor\nchanges in a custom TAM contrib could lead to losing all data stored with\nthis TAM, but with custom TOAST you (in the worst case) could lose just\nTOASTed data and nothing else.\n\nWe have lots of requests from clients and tickets related to TOAST\nlimitations and extending Postgres this way - this growing need made us\ndevelop Pluggable TOAST.\n\n\n\nOn Sun, Oct 23,
2022 at 12:38 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi Nikita,\n>\n> > Pluggable TOAST API was designed with storage flexibility in mind, and\n> Custom TOAST mechanics is\n> > free to use any storage methods\n>\n> Don't you think that this is an arguable design decision? Basically\n> all we know about the underlying TableAM is that it stores tuples\n> _somehow_ and that tuples have TIDs [1]. That's it. We don't know if\n> it even has any sort of pages, whether they are fixed in size or not,\n> whether it uses shared buffers, etc. It may not even require TOAST.\n> (Not to mention the fact that when you have N TOAST implementations\n> and M TableAM implementations now you have to run N x M compatibility\n> tests. And this doesn't account for different versions of Ns and Ms,\n> different platforms and different versions of PostgreSQL.)\n>\n> I believe the proposed approach is architecturally broken from the\n> beginning.\n>\n> It looks like the idea should be actually turned inside out. I.e. what\n> would be nice to have is some sort of _framework_ that helps TableAM\n> authors to implement TOAST (alternatively, the rest of the TableAM\n> except for TOAST) if the TableAM is similar to the default one. In\n> other words the idea is not to implement alternative TOASTers that\n> will work with all possible TableAMs but rather to simplify the task\n> of implementing an alternative TableAM which is similar to the default\n> one except for TOAST. These TableAMs should reuse as much common code\n> as possible except for the parts where they differ.\n>\n> Does it make sense?\n>\n> Sorry, I realize this will probably imply a complete rewrite of the\n> patch. 
This is the reason why one should start proposing changes from\n> gathering the requirements, writing an RFC and run it through several\n> rounds of discussion.\n>\n> [1]: https://www.postgresql.org/docs/current/tableam.html\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nHi!Aleksander,>Don't you think that this is an arguable design decision? Basically>all we know about the underlying TableAM is that it stores tuples>_somehow_ and that tuples have TIDs [1]. That's it. We don't know if>it even has any sort of pages, whether they are fixed in size or not,>whether it uses shared buffers, etc. It may not even require TOAST.>(Not to mention the fact that when you have N TOAST implementations>and M TableAM implementations now you have to run N x M compatibility>tests. And this doesn't account for different versions of Ns and Ms,>different platforms and different versions of PostgreSQL.)>I believe the proposed approach is architecturally broken from the beginning.Existing TOAST mechanics just works, but for certain types of data it does sovery poorly, and, let's face it, this mechanics has very strict limitations that limitoverall capabilities of DBMS, because TOAST was designed when today'susual amounts of data were not the case - I mean tables with hundreds ofbillions of rows, with sizes measured by hundreds of Gb and even by Terabytes.But TOAST itself is good solution to problem of storing oversized attributes, andthough it has some limitations - it is unwise to just throw it away, better way is tomake it up-to-date by revising it, get rid of the most painful limitations and allowto use different (custom) TOAST strategies for special cases.The main idea of Pluggable TOAST is to extend TOAST capabilities by providingcommon API allowing to uniformly use different strategies to TOAST different data.With the acronym \"TOAST\" I mean that data would be stored externally to 
sourcetable, somewhere only its Toaster know where and how - it may be regular Heaptables, Heap tables with different table structure, some other AM tables, files outsideof the database, even files on different storage systems. Pluggable TOAST allowsusing advanced compression methods and complex operations on externally storeddata, like search without fully de-TOASTing data, etc.Also, existing TOAST is a part of Heap AM and is restricted to use Heap only.To make it extensible - we have to separate TOAST from Heap AM. Default TOASTin Pluggable TOAST still uses Heap, but Heap knows nothing about TOAST. It fitsperfectly in OOP paradigms>It looks like the idea should be actually turned inside out. I.e. what>would be nice to have is some sort of _framework_ that helps TableAM>authors to implement TOAST (alternatively, the rest of the TableAM>except for TOAST) if the TableAM is similar to the default one. In>other words the idea is not to implement alternative TOASTers that>will work with all possible TableAMs but rather to simplify the task>of implementing an alternative TableAM which is similar to the default>one except for TOAST. These TableAMs should reuse as much common code>as possible except for the parts where they differ.To implement different TOAST strategies you must have an API to plug them in,otherwise for each strategy you'd have to change the core. TOAST API allows to plugin custom TOAST strategies just by adding contrib modules, once the API is mergedinto the core. 
I have to make a point that different TOAST strategies do not haveto store data with other TAMs, they just could store these data in Heap but usingknowledge of internal data structure of workflow to store them in a more optimalway - like fast and partially compressed and decompressed JSON, lots of largechunks of binary data stored in the database (as you know, largeobjects are notof much help with this) and so on.Implementing another Table AM just to implement another TOAST strategy seems toomuch, the TAM API is very heavy and complex, and you would have to add it as a contrib.Lots of different TAMs would cause much more problems than lots of Toasters becausesuch a solution results in data incompatibility between installations with different TAMsand some minor changes in custom TAM contrib could lead to losing all data stored withthis TAM, but with custom TOAST you (in the worst case) could lose just TOASTed data and nothing else.We have lots of requests from clients and tickets related to TOAST limitations andextending Postgres this way - this growing need made us develop Pluggable TOAST.On Sun, Oct 23, 2022 at 12:38 PM Aleksander Alekseev <aleksander@timescale.com> wrote:Hi Nikita,\n\n> Pluggable TOAST API was designed with storage flexibility in mind, and Custom TOAST mechanics is\n> free to use any storage methods\n\nDon't you think that this is an arguable design decision? Basically\nall we know about the underlying TableAM is that it stores tuples\n_somehow_ and that tuples have TIDs [1]. That's it. We don't know if\nit even has any sort of pages, whether they are fixed in size or not,\nwhether it uses shared buffers, etc. It may not even require TOAST.\n(Not to mention the fact that when you have N TOAST implementations\nand M TableAM implementations now you have to run N x M compatibility\ntests. 
And this doesn't account for different versions of Ns and Ms,\ndifferent platforms and different versions of PostgreSQL.)\n\nI believe the proposed approach is architecturally broken from the beginning.\n\nIt looks like the idea should be actually turned inside out. I.e. what\nwould be nice to have is some sort of _framework_ that helps TableAM\nauthors to implement TOAST (alternatively, the rest of the TableAM\nexcept for TOAST) if the TableAM is similar to the default one. In\nother words the idea is not to implement alternative TOASTers that\nwill work with all possible TableAMs but rather to simplify the task\nof implementing an alternative TableAM which is similar to the default\none except for TOAST. These TableAMs should reuse as much common code\nas possible except for the parts where they differ.\n\nDoes it make sense?\n\nSorry, I realize this will probably imply a complete rewrite of the\npatch. This is the reason why one should start proposing changes from\ngathering the requirements, writing an RFC and run it through several\nrounds of discussion.\n\n[1]: https://www.postgresql.org/docs/current/tableam.html\n\n-- \nBest regards,\nAleksander Alekseev\n-- Regards,Nikita MalakhovPostgres Professional https://postgrespro.ru/", "msg_date": "Sun, 23 Oct 2022 23:38:13 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi Nikita,\n\nI don't argue with most of what you say. I am just pointing out the\nreason why the chosen approach \"N TOASTers x M TableAMs\" will not\nwork:\n\n> Don't you think that this is an arguable design decision? Basically\n> all we know about the underlying TableAM is that it stores tuples\n> _somehow_ and that tuples have TIDs [1]. That's it. We don't know if\n> it even has any sort of pages, whether they are fixed in size or not,\n> whether it uses shared buffers, etc. 
It may not even require TOAST.\n> [...]\n\nAlso I completely agree with:\n\n> Implementing another Table AM just to implement another TOAST strategy seems too\n> much, the TAM API is very heavy and complex, and you would have to add it as a contrib.\n\nThis is what I meant above when talking about the framework for\nsimplifying this task:\n\n> It looks like the idea should be actually turned inside out. I.e. what\n> would be nice to have is some sort of _framework_ that helps TableAM\n> authors to implement TOAST (alternatively, the rest of the TableAM\n> except for TOAST) if the TableAM is similar to the default one.\n\n From the user perspective it's much easier to think about one entity -\nTableAM, and choosing from heapam_with_default_toast and\nheapam_with_different_toast.\n\n From the extension implementer point of view creating TableAMs is a\ndifficult task. This is what the framework should address. Ideally the\ninterface should be as simple as:\n\nCreateParametrizedDefaultHeapAM(SomeTOASTSubstitutionObject, ...other\narguments, in the future...)\n\nWhere the extension author should be worried only about an alternative\nTOAST implementation.\n\nI think at some point such a framework may address at least one more\nissue we have - an inability to change the page size on the table\nlevel. As it was shown by Tomas Vondra [1] the default 8 KB page size\ncan be suboptimal depending on the load. So it would be nice if the\nuser could change it without rebuilding PostgreSQL. Naturally this is\nout of scope of this particular patchset. 
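To make the shape of that entry point a bit more concrete, here is a minimal, self-contained C sketch of the idea (every name here - the routine structs, `make_parametrized_heap_am`, the callbacks - is a toy stand-in invented for illustration, not a real PostgreSQL symbol; in particular this is NOT the real TableAmRoutine from access/tableam.h):

```c
#include <stddef.h>

/*
 * Toy model of the "parametrized default AM" idea: the access method
 * routine carries its TOAST entry points as function pointers, and a
 * constructor returns a copy of the default routine with only those
 * slots replaced.  All names are hypothetical.
 */
typedef struct ToyToastRoutine
{
    /* toast a datum: returns the number of bytes stored externally */
    size_t      (*toast) (const char *data, size_t len);
    /* detoast: returns the number of bytes reconstructed */
    size_t      (*detoast) (size_t stored_len);
} ToyToastRoutine;

typedef struct ToyAmRoutine
{
    const char     *name;
    ToyToastRoutine toaster;    /* TOAST slots live inside the AM routine */
    /* ... dozens of other callbacks elided ... */
} ToyAmRoutine;

/* Default TOAST behaviour: store the value as-is. */
static size_t default_toast(const char *data, size_t len) { (void) data; return len; }
static size_t default_detoast(size_t stored_len) { return stored_len; }

static const ToyAmRoutine default_heap_routine =
{
    "heap", { default_toast, default_detoast }
};

/*
 * The framework entry point: copy the default heap AM and swap in the
 * caller-supplied TOAST implementation, leaving everything else intact.
 */
ToyAmRoutine
make_parametrized_heap_am(const ToyToastRoutine *toaster)
{
    ToyAmRoutine am = default_heap_routine;     /* struct copy */

    if (toaster != NULL)
        am.toaster = *toaster;
    return am;
}

/* Example alternative toaster that "compresses" values to half size. */
static size_t half_toast(const char *data, size_t len) { (void) data; return len / 2; }
static size_t half_detoast(size_t stored_len) { return stored_len * 2; }

const ToyToastRoutine half_toaster = { half_toast, half_detoast };

/* How many bytes would this AM store for a 100-byte datum? */
size_t
stored_size_with(const ToyToastRoutine *tr)
{
    ToyAmRoutine am = make_parametrized_heap_am(tr);

    return am.toaster.toast("x", 100);
}
```

The extension author only has to fill in a `ToyToastRoutine`; everything else in the AM remains shared code, which is the whole point of the framework.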
I just wanted to point out\nopportunities we have here.\n\n[1]: https://www.postgresql.org/message-id/flat/b4861449-6c54-ccf8-e67c-c039228cdc6d%40enterprisedb.com\n\n--\nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 24 Oct 2022 12:10:18 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi!\n\n>I don't argue with most of what you say. I am just pointing out the\n>reason why the chosen approach \"N TOASTers x M TableAMs\" will not\n>work:\n\nWe assume that TAM used in custom Toaster works as it is should work,\nand leave TAM internals to this TAM developer - say, we do not want to\nchange internals of Heap AM.\n\nWe don't want to create some kind of silver bullet. There are already\nexisting\nand widely-known (from production environments) problems with TOAST\nmechanics, and we suggest not too complex way to solve them.\n\nAs I mentioned before, Pluggable TOAST does not change Heap AM, it is\nnot minded to change TAMs.\n\n>This is what I meant above when talking about the framework for\n>simplifying this task:\n\nThat's a kind of generalizing custom TOAST implementation. It is very\ngood intention, but keep in mind that different kinds of data require very\ndifferent approach to external storage - say, JSON TOAST works with\nmaps of keys and values, super binary object (experimental name) does\nnot work with internals of TOASTed data except searching. But, we thought\n about that too and reusable code resides in toast_internals.c source - any\ncustom Toaster working with Heap could use it's insert, update and fetch\nmethods, but deal with data on it's own.\n\nEven with the general framework there must be a common interface which\nwould be the entry point for those custom methods developed with the\nframework. 
That's what the TOAST API is - just an interface that all custom\nTOAST implementations use to have a common entry point from any TAM,\nwith syntax support to plug in custom TOAST implementations from the SQL.\nNo less, but no more.\n\nMoreover, our patches show that even Generic (default) TOAST implementation\ncould still be left as-is, without necessity to route it via our API,\nthough it is logically\nwrong because common API is meant to be common for all TOAST implementations\nwithout exceptions.\n\nHave I answered your question? Please don't hesitate to point to any unclear\nparts, I'd be glad to explain that.\n\nThe main idea in TOAST API is very elegant and light, and it is designed\nalike\nto Pluggable Storage (Table AM API).\n\nOn Mon, Oct 24, 2022 at 12:10 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi Nikita,\n>\n> I don't argue with most of what you say. I am just pointing out the\n> reason why the chosen approach \"N TOASTers x M TableAMs\" will not\n> work:\n>\n> > Don't you think that this is an arguable design decision? Basically\n> > all we know about the underlying TableAM is that it stores tuples\n> > _somehow_ and that tuples have TIDs [1]. That's it. We don't know if\n> > it even has any sort of pages, whether they are fixed in size or not,\n> > whether it uses shared buffers, etc. It may not even require TOAST.\n> > [...]\n>\n> Also I completely agree with:\n>\n> > Implementing another Table AM just to implement another TOAST strategy\n> seems too\n> > much, the TAM API is very heavy and complex, and you would have to add\n> it as a contrib.\n>\n> This is what I meant above when talking about the framework for\n> simplifying this task:\n>\n> > It looks like the idea should be actually turned inside out. I.e. 
what\n> > would be nice to have is some sort of _framework_ that helps TableAM\n> > authors to implement TOAST (alternatively, the rest of the TableAM\n> > except for TOAST) if the TableAM is similar to the default one.\n>\n> From the user perspective it's much easier to think about one entity -\n> TableAM, and choosing from heapam_with_default_toast and\n> heapam_with_different_toast.\n>\n> From the extension implementer point of view creating TableAMs is a\n> difficult task. This is what the framework should address. Ideally the\n> interface should be as simple as:\n>\n> CreateParametrizedDefaultHeapAM(SomeTOASTSubstitutionObject, ...other\n> arguments, in the future...)\n>\n> Where the extension author should be worried only about an alternative\n> TOAST implementation.\n>\n> I think at some point such a framework may address at least one more\n> issue we have - an inability to change the page size on the table\n> level. As it was shown by Tomas Vondra [1] the default 8 KB page size\n> can be suboptimal depending on the load. So it would be nice if the\n> user could change it without rebuilding PostgreSQL. Naturally this is\n> out of scope of this particular patchset. I just wanted to point out\n> opportunities we have here.\n>\n> [1]:\n> https://www.postgresql.org/message-id/flat/b4861449-6c54-ccf8-e67c-c039228cdc6d%40enterprisedb.com\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Mon, 24 Oct 2022 14:16:07 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi Nikita,\n\n> >I don't argue with most of what you say. I am just pointing out the\n> >reason why the chosen approach \"N TOASTers x M TableAMs\" will not\n> >work:\n>\n> We assume that TAM used in custom Toaster works as it is should work,\n> and leave TAM internals to this TAM developer - say, we do not want to\n> change internals of Heap AM.\n>\n> We don't want to create some kind of silver bullet.\n\nThis is exactly the point. In order to not to create a silver bullet,\nTOASTers should be limited to a single TableAM. The one we know uses\npages of a known fixed size, the one that actually requires TOAST\nbecause pages are relatively small, etc.\n\n> That's what the TOAST API is - just an interface that all custom\n> TOAST implementations use to have a common entry point from any TAM,\n> [...]\n\nI believe this is the source of misunderstanding. Note that not _any_\nTableAM needs TOAST to begin with. 
As an example, if one chooses to\nimplement a column-organized TableAM that stores all text/bytea\nattributes in a separate dictionary file this person doesn't need\nTOAST and doesn't want to be constrained by the need of choosing one.\n\nFor this reason the \"N TOASTers x M TableAMs\" approach is\narchitecturally broken.\n\n> keep in mind that different kinds of data require very\n> different approach to external storage - say, JSON TOAST works with\n> maps of keys and values, [...]\n\nTo clarify: is an ability to specify TOASTers for given columns and/or\ntypes also part of the plan?\n\n> Have I answered your question? Please don't hesitate to point to any unclear\n> parts, I'd be glad to explain that.\n\nNo. To be honest, it looks like you are merely discarding most/any\nfeedback the community provided so far.\n\nI really think that pluggable TOASTers would be a great feature.\nHowever if the goal is to get it into the core I doubt that we are\ngoing to make much progress with the current approach.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 24 Oct 2022 14:55:45 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\n\n>This is exactly the point. In order to not to create a silver bullet,\n>TOASTers should be limited to a single TableAM. The one we know uses\n>pages of a known fixed size, the one that actually requires TOAST\n>because pages are relatively small, etc.\n\nCurrently all our TOAST implementations use Heap AM, except ones\nthat use external (truly external, i.e. files outside DB) storage. Using\nTable AM\nRoutine and routing AM methods calls via it is a topic for further\ndiscussion,\nif Pluggable TOAST will be committed. And even then it would be an open\nissue.\n\n>I believe this is the source of misunderstanding. Note that not _any_\n>TableAM needs TOAST to begin with. 
As an example, if one chooses to\n>implement a column-organized TableAM that stores all text/bytea\n>attributes in a separate dictionary file this person doesn't need\n>TOAST and doesn't want to be constrained by the need of choosing one.\n\n>For this reason the \"N TOASTers x M TableAMs\" approach is\n>architecturally broken.\n\nTOAST implementation is not necessary for Table AM. And TOAST API is just\nan optional open interface - SET TOASTER is an option for CREATE/ALTER\nTABLE command. In previous discussion we haven't mentioned an approach\n\"N TOASTers x M TableAMs\".\n\n>To clarify: is an ability to specify TOASTers for given columns and/or\n>types also part of the plan?\n\nFor table columns it is already supported by the syntax part of the TOAST\nAPI.\nFor data types we reserved the validation part of the API, but this support\nis still a\nsubject for discussion, although we think it will be very handy for DB\nusers, like\nwe issue something like:\nCREATE TYPE ... TOASTER=jsonb_toaster ... ;\nor\nALTER TYPE JSONB SET TOASTER jsonb_toaster;\nand do not have to set special toaster for jsonb column each time we create\nor alter a table with it.\n\n>No. To be honest, it looks like you are merely discarding most/any\n>feedback the community provided so far.\n\nVery sorry to read that. Almost all of the requests in this discussion have\nbeen taken\ninto account in patches, and the most serious one - I mean pg_attribute\nexpansion\nwhich was mentioned by Tom Lane and Robert Haas - is being fixed right now\nand\nwill be ready very soon.\n\n>I really think that pluggable TOASTers would be a great feature.\n>However if the goal is to get it into the core I doubt that we are\n>going to make much progress with the current approach.\n\nWe hope we will. 
This feature is very demanded by end-users, and will be\neven more\nas time goes by - current TOAST limitations and how they affect DBMS\nperformance is\na serious drawback in comparison to competitors.\n\nOn Mon, Oct 24, 2022 at 2:55 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi Nikita,\n>\n> > >I don't argue with most of what you say. I am just pointing out the\n> > >reason why the chosen approach \"N TOASTers x M TableAMs\" will not\n> > >work:\n> >\n> > We assume that TAM used in custom Toaster works as it is should work,\n> > and leave TAM internals to this TAM developer - say, we do not want to\n> > change internals of Heap AM.\n> >\n> > We don't want to create some kind of silver bullet.\n>\n> This is exactly the point. In order to not to create a silver bullet,\n> TOASTers should be limited to a single TableAM. The one we know uses\n> pages of a known fixed size, the one that actually requires TOAST\n> because pages are relatively small, etc.\n>\n> > That's what the TOAST API is - just an interface that all custom\n> > TOAST implementations use to have a common entry point from any TAM,\n> > [...]\n>\n> I believe this is the source of misunderstanding. Note that not _any_\n> TableAM needs TOAST to begin with. As an example, if one chooses to\n> implement a column-organized TableAM that stores all text/bytea\n> attributes in a separate dictionary file this person doesn't need\n> TOAST and doesn't want to be constrained by the need of choosing one.\n>\n> For this reason the \"N TOASTers x M TableAMs\" approach is\n> architecturally broken.\n>\n> > keep in mind that different kinds of data require very\n> > different approach to external storage - say, JSON TOAST works with\n> > maps of keys and values, [...]\n>\n> To clarify: is an ability to specify TOASTers for given columns and/or\n> types also part of the plan?\n>\n> > Have I answered your question? 
Please don't hesitate to point to any\n> unclear\n> > parts, I'd be glad to explain that.\n>\n> No. To be honest, it looks like you are merely discarding most/any\n> feedback the community provided so far.\n>\n> I really think that pluggable TOASTers would be a great feature.\n> However if the goal is to get it into the core I doubt that we are\n> going to make much progress with the current approach.\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Mon, 24 Oct 2022 15:59:44 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi Nikita,\n\n> Using Table AM Routine and routing AM methods calls via it is a topic for further discussion,\n> if Pluggable TOAST will be committed. [...] 
And even then it would be an open issue.\n\n From personal experience with the project I have serious doubts this\nis going to happen. Before such invasive changes are going to be\naccepted there should be a clear understanding of how exactly TOASTers\nare supposed to be used. This should be part of the documentation in\nthe patchset. Additionally there should be an open-source or\nsource-available extension that actually demonstrates the benefits of\nTOASTers with reproducible benchmarks (we didn't even get to that part\nyet).\n\n> TOAST implementation is not necessary for Table AM.\n\nWhat other use cases for TOAST do you have in mind?\n\n>> > Have I answered your question? Please don't hesitate to point to any unclear\n>> > parts, I'd be glad to explain that.\n>>\n>> No. To be honest, it looks like you are merely discarding most/any\n>> feedback the community provided so far.\n>>\n>> I really think that pluggable TOASTers would be a great feature.\n>> However if the goal is to get it into the core I doubt that we are\n>> going to make much progress with the current approach.\n\nTo clarify, the concern about \"N TOASTers vs M TableAM\" was expressed\nby Robert Haas back in Jan 2022:\n\n> I agree ... but I'm also worried about what happens when we have\n> multiple table AMs. One can imagine a new table AM that is\n> specifically optimized for TOAST which can be used with an existing\n> heap table. One can imagine a new table AM for the main table that\n> wants to use something different for TOAST. So, I don't think it's\n> right to imagine that the choice of TOASTer depends solely on the\n> column data type. I'm not really sure how this should work exactly ...\n> but it needs careful thought.\n\nThis is the most important open question so far to my knowledge. 
It\nwas never addressed, it doesn't seem like there is a plan of doing so,\nthe suggested alternative approach was ignored, nor are there any\nstrong arguments that would defend this design choice and/or criticize\nthe alternative one (other than general words \"don't worry we know\nwhat we are doing\").\n\nThis is what I mean by the community feedback being discarded.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 24 Oct 2022 16:53:20 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi!\n\n>From personal experience with the project I have serious doubts this\n>is going to happen. Before such invasive changes are going to be\n>accepted there should be a clear understanding of how exactly TOASTers\n>are supposed to be used. This should be part of the documentation in\n>the patchset. Additionally there should be an open-source or\n>source-available extension that actually demonstrates the benefits of\n>TOASTers with reproducible benchmarks (we didn't even get to that part\n>yet).\n\nActually, there's a documentation part in the patchset. Also, there is a\nREADME file\nexplaining the API.\nIn addition, we have several custom TOAST implementations with some\nresults - they were published and presented on PgCon. I was asked to exclude\ncustom TOAST implementations and some further improvements for the first\niteration, that's why currently the patchset consists only of 3 patches -\nbase\ncore changes, default TOAST implementation via TOAST API and documentation\npackage.\n\n>What other use cases for TOAST do you have in mind?\n\nThe main use case is the same as for the TOAST mechanism - storing and\nretrieving\noversized data. But we expanded this case with some details -\n- update TOASTed data (yes, the current TOAST implementation cannot update\nstored\ndata - it marks the whole TOASTed object as dead and stores a new one);\n- retrieve part of the stored data chunks without fully de-TOASTing stored\ndata (even\nwith existing TOAST this will be painful if you have to get just a small\npart of a several\nhundred MB sized object);\n- be able to store objects of size larger than 1 Gb;\n- store more than 4 Tb of TOASTed data for one table;\n- optimize storage for fast search and retrieval of parts of a TOASTed object\n- this is a\nmust-have for effectively using JSON, PostgreSQL already is in a catching-up\nposition\nin the JSON performance field.\n\nFor all these cases we have test results that show improvements in storage\nand performance.\n\n>To clarify, the concern about \"N TOASTers vs M TableAM\" was expressed\n>by Robert Haas back in Jan 2022:\n\n>> I agree ... but I'm also worried about what happens when we have\n>> multiple table AMs. One can imagine a new table AM that is\n>> specifically optimized for TOAST which can be used with an existing\n>> heap table. One can imagine a new table AM for the main table that\n>> wants to use something different for TOAST. So, I don't think it's\n>> right to imagine that the choice of TOASTer depends solely on the\n>> column data type. I'm not really sure how this should work exactly ...\n>> but it needs careful thought.\n\n>This is the most important open question so far to my knowledge. 
It\n>was never addressed, it doesn't seem like there is a plan of doing so,\n>the suggested alternative approach was ignored, nor are there any\n>strong arguments that would defend this design choice and/or criticize\n>the alternative one (other than general words \"don't worry we know\n>what we are doing\").\n\n>This is what I mean by the community feedback being discarded.\n\nMaybe there was some misunderstanding, I was new to this project and\ncompany at that time - I was introduced to it in the middle of December\n2021, but Theodor Sigaev gave an answer to Mr. Haas:\n\n>Right. that's why we propose a validate method (may be, it's a wrong\n>name, but I don't known better one) which accepts several arguments, one\n>of which is table AM oid. If that method returns false then toaster\n>isn't useful with current TAM, storage or/and compression kinds, etc.\n\nAnd this is a generalized and correct (from the OOP POV) means to provide a\nway to ensure that this concrete TOAST implementation is valid for the Table AM\ncalling it.\n\n\nOn Mon, Oct 24, 2022 at 4:53 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi Nikita,\n>\n> > Using Table AM Routine and routing AM methods calls via it is a topic\n> for further discussion,\n> > if Pluggable TOAST will be committed. [...] And even then it would be an\n> open issue.\n>\n> From personal experience with the project I have serious doubts this\n> is going to happen. Before such invasive changes are going to be\n> accepted there should be a clear understanding of how exactly TOASTers\n> are supposed to be used. This should be part of the documentation in\n> the patchset. 
Additionally there should be an open-soruce or\n> source-available extension that actually demonstrates the benefits of\n> TOASTers with reproducible benchmarks (we didn't even get to that part\n> yet).\n>\n> > TOAST implementation is not necessary for Table AM.\n>\n> What other use cases for TOAST do you have in mind?\n>\n> >> > Have I answered your question? Please don't hesitate to point to any\n> unclear\n> >> > parts, I'd be glad to explain that.\n> >>\n> >> No. To be honest, it looks like you are merely discarding most/any\n> >> feedback the community provided so far.\n> >>\n> >> I really think that pluggable TOASTers would be a great feature.\n> >> However if the goal is to get it into the core I doubt that we are\n> >> going to make much progress with the current approach.\n>\n> To clarify, the concern about \"N TOASTers vs M TableAM\" was expressed\n> by Robert Haas back in Jan 2022:\n>\n> > I agree ... but I'm also worried about what happens when we have\n> > multiple table AMs. One can imagine a new table AM that is\n> > specifically optimized for TOAST which can be used with an existing\n> > heap table. One can imagine a new table AM for the main table that\n> > wants to use something different for TOAST. So, I don't think it's\n> > right to imagine that the choice of TOASTer depends solely on the\n> > column data type. I'm not really sure how this should work exactly ...\n> > but it needs careful thought.\n>\n> This is the most important open question so far to my knowledge. 
It\n> was never addressed, it doesn't seem like there is a plan of doing so,\n> the suggested alternative approach was ignored, nor are there any\n> strong arguments that would defend this design choice and/or criticize\n> the alternative one (other than general words \"don't worry we know\n> what we are doing\").\n>\n> This what I mean by the community feedback being discarded.\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nHi!>From personal experience with the project I have serious doubts this>is going to happen. Before such invasive changes are going to be>accepted there should be a clear understanding of how exactly TOASTers>are supposed to be used. This should be part of the documentation in>the patchset. Additionally there should be an open-soruce or>source-available extension that actually demonstrates the benefits of>TOASTers with reproducible benchmarks (we didn't even get to that part>yet).Actually, there's a documentation part in the patchset. Also, there is README fileexplaining the API.In addition, we have several custom TOAST implementations with someresults - they were published and presented on PgCon. I was asked to excludecustom TOAST implementations and some further improvements for the firstiteration, that's why currently the patchset consists only of 3 patches - basecore changes, default TOAST implementation via TOAST API and documentationpackage.>What other use cases for TOAST do you have in mind?The main use case is the same as for the TOAST mechanism - storing and retrievingoversized data. 
But we expanded this case with some details - - update TOASTed data (yes, the current TOAST implementation cannot update stored data - it marks the whole TOASTed object as dead and stores a new one);- retrieve part of the stored data chunks without fully de-TOASTing stored data (even with existing TOAST this will be painful if you have to get just a small part of the several hundreds Mb sized object);- be able to store objects of size larger than 1 Gb;- store more than 4 Tb of TOASTed data for one table;- optimize storage for fast search and retrieval of parts of TOASTed object - this is a must-have for effectively using JSON, PostgreSQL already is in a catching-up position in the JSON performance field.For all these cases we have test results that show improvements in storage and performance.>To clarify, the concern about \"N TOASTers vs M TableAM\" was expressed>by Robert Haas back in Jan 2022:>> I agree ... but I'm also worried about what happens when we have>> multiple table AMs. One can imagine a new table AM that is>> specifically optimized for TOAST which can be used with an existing>> heap table. One can imagine a new table AM for the main table that>> wants to use something different for TOAST. So, I don't think it's>> right to imagine that the choice of TOASTer depends solely on the>> column data type. I'm not really sure how this should work exactly ...>> but it needs careful thought.>This is the most important open question so far to my knowledge. 
It>was never addressed, it doesn't seem like there is a plan of doing so,>the suggested alternative approach was ignored, nor are there any>strong arguments that would defend this design choice and/or criticize>the alternative one (other than general words \"don't worry we know>what we are doing\").>This is what I mean by the community feedback being discarded.Maybe there was some misunderstanding, I was new to this project and company at that time - I was introduced to it in the middle of December 2021, but Theodor Sigaev gave an answer to Mr. Haas:>Right. that's why we propose a validate method (maybe it's a wrong>name, but I don't know a better one) which accepts several arguments, one>of which is table AM oid. If that method returns false then toaster>isn't useful with current TAM, storage and/or compression kinds, etc. And this is a generalized and correct way, from the OOP POV, to ensure that this concrete TOAST implementation is valid for the Table AM calling it. On Mon, Oct 24, 2022 at 4:53 PM Aleksander Alekseev <aleksander@timescale.com> wrote:Hi Nikita,\n\n> Using Table AM Routine and routing AM methods calls via it is a topic for further discussion,\n> if Pluggable TOAST will be committed. [...] And even then it would be an open issue.\n\n From personal experience with the project I have serious doubts this\nis going to happen. Before such invasive changes are going to be\naccepted there should be a clear understanding of how exactly TOASTers\nare supposed to be used. This should be part of the documentation in\nthe patchset. Additionally there should be an open-source or\nsource-available extension that actually demonstrates the benefits of\nTOASTers with reproducible benchmarks (we didn't even get to that part\nyet).\n\n> TOAST implementation is not necessary for Table AM.\n\nWhat other use cases for TOAST do you have in mind?\n\n>> > Have I answered your question? 
Please don't hesitate to point to any unclear\n>> > parts, I'd be glad to explain that.\n>>\n>> No. To be honest, it looks like you are merely discarding most/any\n>> feedback the community provided so far.\n>>\n>> I really think that pluggable TOASTers would be a great feature.\n>> However if the goal is to get it into the core I doubt that we are\n>> going to make much progress with the current approach.\n\nTo clarify, the concern about \"N TOASTers vs M TableAM\" was expressed\nby Robert Haas back in Jan 2022:\n\n> I agree ... but I'm also worried about what happens when we have\n> multiple table AMs. One can imagine a new table AM that is\n> specifically optimized for TOAST which can be used with an existing\n> heap table. One can imagine a new table AM for the main table that\n> wants to use something different for TOAST. So, I don't think it's\n> right to imagine that the choice of TOASTer depends solely on the\n> column data type. I'm not really sure how this should work exactly ...\n> but it needs careful thought.\n\nThis is the most important open question so far to my knowledge. 
It\nwas never addressed, it doesn't seem like there is a plan of doing so,\nthe suggested alternative approach was ignored, nor are there any\nstrong arguments that would defend this design choice and/or criticize\nthe alternative one (other than general words \"don't worry we know\nwhat we are doing\").\n\nThis is what I mean by the community feedback being discarded.\n\n-- \nBest regards,\nAleksander Alekseev\n-- Regards,Nikita MalakhovPostgres Professional https://postgrespro.ru/", "msg_date": "Mon, 24 Oct 2022 17:44:35 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi Nikita,\n\n> > > TOAST implementation is not necessary for Table AM.\n>\n> >What other use cases for TOAST do you have in mind?\n>\n> The main use case is the same as for the TOAST mechanism - storing and retrieving\n> oversized data. But we expanded this case with some details -\n> - update TOASTed data (yes, the current TOAST implementation cannot update stored\n> data - it marks the whole TOASTed object as dead and stores a new one);\n> - retrieve part of the stored data chunks without fully de-TOASTing stored data (even\n> with existing TOAST this will be painful if you have to get just a small part of the several\n>  hundreds Mb sized object);\n> - be able to store objects of size larger than 1 Gb;\n> - store more than 4 Tb of TOASTed data for one table;\n> - optimize storage for fast search and retrieval of parts of TOASTed object - this is\n> must-have for effectively using JSON, PostgreSQL already is in catching-up position\n> in JSON performance field.\n\nI see. Actually most of this is what TableAM does. We just happened to\ngive this part of TableAM a separate name. The only exception is the\nlast case, when you create custom TOASTers for particular types and\npotentially specify them for the given column.\n\nAll in all, this makes sense.\n\n> Right. 
that's why we propose a validate method (maybe it's a wrong\n> name, but I don't know a better one) which accepts several arguments, one\n> of which is table AM oid. If that method returns false then toaster\n> isn't useful with current TAM, storage and/or compression kinds, etc.\n\nOK, I missed this message. So there was some misunderstanding after\nall, sorry for this.\n\nThat's exactly what I wanted to know. It's much better than allowing\nany TOASTer to run on top of any TableAM.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 24 Oct 2022 18:37:31 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\n\nAleksander, thanks for the discussion! It seems to me that I have to add\nsome parts\nof it to the API documentation, to clarify the details on API purpose and\nuse-cases.\n\nOn Mon, Oct 24, 2022 at 6:37 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi Nikita,\n>\n> > > > TOAST implementation is not necessary for Table AM.\n> >\n> > >What other use cases for TOAST do you have in mind?\n> >\n> > The main use case is the same as for the TOAST mechanism - storing and\n> retrieving\n> > oversized data. 
But we expanded this case with some details -\n> > - update TOASTed data (yes, current TOAST implementation cannot update\n> stored\n> > data - is marks whole TOASTED object as dead and stores new one);\n> > - retrieve part of the stored data chunks without fully de-TOASTing\n> stored data (even\n> > with existing TOAST this will be painful if you have to get just a small\n> part of the several\n> > hundreds Mb sized object);\n> > - be able to store objects of size larger than 1 Gb;\n> > - store more than 4 Tb of TOASTed data for one table;\n> > - optimize storage for fast search and retrieval of parts of TOASTed\n> object - this is\n> > must-have for effectively using JSON, PostgreSQL already is in\n> catching-up position\n> > in JSON performance field.\n>\n> I see. Actually most of this is what TableAM does. We just happened to\n> give this part of TableAM a separate name. The only exception is the\n> last case, when you create custom TOASTers for particular types and\n> potentially specify them for the given column.\n>\n> All in all, this makes sense.\n>\n> > Right. that's why we propose a validate method (may be, it's a wrong\n> > name, but I don't known better one) which accepts several arguments, one\n> > of which is table AM oid. If that method returns false then toaster\n> > isn't useful with current TAM, storage or/and compression kinds, etc.\n>\n> OK, I missed this message. So there was some misunderstanding after\n> all, sorry for this.\n>\n> That's exactly what I wanted to know. It's much better than allowing\n> any TOASTer to run on top of any TableAM.\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nHi,Aleksander, thanks for the discussion! 
It seems to me that I have to add some parts of it to the API documentation, to clarify the details on API purpose and use-cases.On Mon, Oct 24, 2022 at 6:37 PM Aleksander Alekseev <aleksander@timescale.com> wrote:Hi Nikita,\n\n> > > TOAST implementation is not necessary for Table AM.\n>\n> >What other use cases for TOAST do you have in mind?\n>\n> The main use case is the same as for the TOAST mechanism - storing and retrieving\n> oversized data. But we expanded this case with some details -\n> - update TOASTed data (yes, the current TOAST implementation cannot update stored\n> data - it marks the whole TOASTed object as dead and stores a new one);\n> - retrieve part of the stored data chunks without fully de-TOASTing stored data (even\n> with existing TOAST this will be painful if you have to get just a small part of the several\n>  hundreds Mb sized object);\n> - be able to store objects of size larger than 1 Gb;\n> - store more than 4 Tb of TOASTed data for one table;\n> - optimize storage for fast search and retrieval of parts of TOASTed object - this is\n> must-have for effectively using JSON, PostgreSQL already is in catching-up position\n> in JSON performance field.\n\nI see. Actually most of this is what TableAM does. We just happened to\ngive this part of TableAM a separate name. The only exception is the\nlast case, when you create custom TOASTers for particular types and\npotentially specify them for the given column.\n\nAll in all, this makes sense.\n\n> Right. that's why we propose a validate method (maybe it's a wrong\n> name, but I don't know a better one) which accepts several arguments, one\n> of which is table AM oid. If that method returns false then toaster\n> isn't useful with current TAM, storage and/or compression kinds, etc.\n\nOK, I missed this message. So there was some misunderstanding after\nall, sorry for this.\n\nThat's exactly what I wanted to know. 
It's much better than allowing\nany TOASTer to run on top of any TableAM.\n\n-- \nBest regards,\nAleksander Alekseev\n-- Regards,Nikita MalakhovPostgres Professional https://postgrespro.ru/", "msg_date": "Mon, 24 Oct 2022 22:24:58 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi Nikita,\n\n> Aleksander, thanks for the discussion! It seems to me that I have to add some parts\n> of it to API documentation, to clarify the details on API purpose and use-cases.\n\nI'm in the process of testing and reviewing the v23 patchset. So far I\njust wanted to report that there seems to be certain issues with the\nformatting and/or trailing whitespaces in the patches:\n\n```\n$ git am <v23-0001-toaster-interface.patch\nApplying: Pluggable TOAST API interface along with reference TOAST mechanics\n.git/rebase-apply/patch:1122: indent with spaces.\n TsrRoutine *tsr = makeNode(TsrRoutine);\n.git/rebase-apply/patch:1123: indent with spaces.\n PG_RETURN_POINTER(tsr);\n.git/rebase-apply/patch:6172: new blank line at EOF.\n+\nwarning: 3 lines add whitespace errors.\n\n$ git am <v23-0002-toaster-default.patch\nApplying: Default TOAST re-implemented using Toaster API.\n.git/rebase-apply/patch:1879: indent with spaces.\n if (*value == old_value)\n.git/rebase-apply/patch:1881: indent with spaces.\n return;\n.git/rebase-apply/patch:3069: trailing whitespace.\n * CREATE TOASTER name HANDLER handler_name\n.git/rebase-apply/patch:3603: trailing whitespace.\nfetch_toast_slice(Relation toastrel, Oid valueid,\n.git/rebase-apply/patch:3654: trailing whitespace.\ntoast_fetch_toast_slice(Relation toastrel, Oid valueid,\nwarning: 5 lines add whitespace errors.\n\n$ git am <v23-0003-toaster-docs.patch\nApplying: Pluggable TOAST API documentation package\n.git/rebase-apply/patch:388: tab in indent.\n TsrRoutine *tsrroutine = makeNode(TsrRoutine);\n.git/rebase-apply/patch:389: tab in indent.\n tsrroutine->init = 
custom_toast_init;\n.git/rebase-apply/patch:390: tab in indent.\n tsrroutine->toast = custom_toast;\n.git/rebase-apply/patch:391: tab in indent.\n tsrroutine->detoast = custom_detoast;\n.git/rebase-apply/patch:392: tab in indent.\n tsrroutine->deltoast = custom_delete_toast;\nwarning: squelched 12 whitespace errors\nwarning: 17 lines add whitespace errors.\n```\n\nPlease don't forget to run `pgindent` before formatting the patches\nwith `git format-patch` next time.\n\nI'm going to submit a more detailed code review soon.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 1 Nov 2022 13:05:36 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi Nikita,\n\n> Please don't forget to run `pgindent` before formatting the patches\n> with `git format-patch` next time.\n\nThere are also some compiler warnings, please see the attachment.\n\n> I'm going to submit a more detailed code review soon.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Wed, 2 Nov 2022 00:15:27 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi Nikita,\n\n> I'm going to submit a more detailed code review soon.\n\nAs promised, here is my feedback.\n\n```\n+ Toaster could be assigned to toastable column with expression\n+ <literal>STORAGE external TOASTER\n<replaceable>toaster_name</replaceable></literal>\n```\n\n1.1. Could you please clarify what is the motivation behind such a\nverbose syntax?\n\n```\n+Please note that custom Toasters could be assigned only to External\n+storage type.\n```\n\n1.2. That's odd. TOAST should work for EXTENDED and MAIN storage\nstrategies as well. On top of that, why should custom TOASTers have\nany knowledge of the default four-stage algorithm and the storage\nstrategies? 
If the storage strategy is actually ignored, it shouldn't\nbe used in the syntax.\n\n```\n+Toasters could use any storage, advanced compression, encryption, and\n```\n\n2. Although it's possible to implement some encryption in a TOASTer I\ndon't think the documentation should advertise this.\n\n```\n+typedef struct TsrRoutine\n+{\n+ NodeTag type;\n+\n+ /* interface functions */\n+ toast_init init;\n+ toast_function toast;\n+ update_toast_function update_toast;\n+ copy_toast_function copy_toast;\n+ detoast_function detoast;\n+ del_toast_function deltoast;\n+ get_vtable_function get_vtable;\n+ toastervalidate_function toastervalidate;\n+} TsrRoutine;\n```\n\n3.1. I believe we should rename this to something like `struct\nToastImpl`. The `Tsr` abbreviation only creates confusion, and this is\nnot a routine.\n3.2. The names of the fields should be made consistent - e.g. if you\nhave update_toast and copy_toast then del_toast should be renamed to\ndelete_toast.\n3.2. Additionally, in some parts of the patch del_toast is used, while\nin others - deltoast.\n\n4. The user documentation should have clear answers on the following questions:\n4.1. What will happen if the user tries to delete a TOASTer while\nstill having data that was TOASTed using this TOASTer? Or this is not\nsupported and the TOASTers should exist in the database indefinitely\nafter creation?\n4.2. Is it possible to delete and/or alter the default TOASTer?\n4.3. Please make sure the previously discussed clarification regarding\n\"N TOASTers vs M TableAMs\" and the validation function is explicitly\npresent.\n\n```\nToaster initialization. Initial TOAST table creation and other startup\npreparations.\n```\n\n5.1. The documentation should clarify how many times init() is called\n- is it done once for the TOASTer, once per relation, etc.\n5.2. Why there is no free() method?\n\n```\nUpdates TOASTed value. Returns new TOAST pointer.\n```\n\n6. 
It's not clear for what reason update_toast() will be executed to\nbegin with. This should be clarified in the documentation. How does it\ndiffer from copy_toast()?\n\n```\nValidates Toaster for given data type, storage and compression options.\n```\n\n7. It should be explicitly stated when validate() is called and what\nhappens depending on the return value.\n\n```\n+Virtual Functions Table API Extension\n```\n\n8. IMO this section does a very poor job in explaining what this is\nfor and when I may want to use this; what particular problem are we\naddressing here?\n\n9. There are typos in the comments and the documentation,\ns/Vaitual/Virtual/, s/vurtual/virtual/ etc. Also there are missing\narticles. Please run your patches through a spellchecker.\n\nI suggest we address this piece of feedback before proceeding further.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 3 Nov 2022 13:38:57 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi!\n\nThank you very much for the feedback.\nAnswering in the order of the questions/remarks:\n\n>I'm in the process of testing and reviewing the v23 patchset. So far I\n>just wanted to report that there seems to be certain issues with the\n>formatting and/or trailing whitespaces in the patches:\n\nAccepted, will be done.\n\n>There are also some compiler warnings, please see the attachment.\n\nThis too. These warnings showed up recently due to fresh changes in macros.\n\n>1.1. Could you please clarify what is the motivation behind such a\n>verbose syntax?\n\nThe Toaster is set for the table column. Each TOASTable column could have\na different Toaster, so the column option is the most obvious place to add it.\n\n>1.2. That's odd. TOAST should work for EXTENDED and MAIN storage\n>strategies as well. 
On top of that, why should custom TOASTers have\n>any knowledge of the default four-stage algorithm and the storage\n>strategies? If the storage strategy is actually ignored, it shouldn't\n>be used in the syntax.\n\nThe EXTENDED storage strategy means that the TOASTed value is compressed\nbefore being TOASTed, so no knowledge of its internals could be of any\nuse. The EXTERNAL strategy means that the value is being TOASTed in its original\nform. The storage strategy is internal to the AM used, and the TOAST\nmechanics is not meant to interfere with it. Again, STORAGE EXTERNAL\nexplicitly shows that the value will be stored out-of-line.\n\n>2. Although it's possible to implement some encryption in a TOASTer I\n>don't think the documentation should advertise this.\n\nIt is a good example of what the Toaster could be responsible for, because\nwe proposed moving compression into Toasters as a TOAST option -\nfor example, being set while initializing the Toaster.\n\n>3.1. I believe we should rename this to something like `struct\n>ToastImpl`. The `Tsr` abbreviation only creates confusion, and this is\n>not a routine.\n\nIt was done similarly to the Table AM Routine (please check the Pluggable\nStorage API), along with some other decisions.\n\n>3.2. The names of the fields should be made consistent - e.g. if you\n>have update_toast and copy_toast then del_toast should be renamed to\n>delete_toast.\n>3.2. Additionally, in some parts of the patch del_toast is used, while\n>in others - deltoast.\n\nAccepted, will be fixed in the next rebase.\n\n>4. The user documentation should have clear answers on the following\nquestions:\n>4.1. What will happen if the user tries to delete a TOASTer while\n>still having data that was TOASTed using this TOASTer? Or this is not\n>supported and the TOASTers should exist in the database indefinitely\n>after creation?\n>4.2. Is it possible to delete and/or alter the default TOASTer?\n>4.3. 
Please make sure the previously discussed clarification regarding\n>\"N TOASTers vs M TableAMs\" and the validation function is explicitly\n>present.\n\nThank you very much for checking the documentation package. These are\nvery good comments, I'll include these topics in the next patchset.\n\n>5.1. The documentation should clarify how many times init() is called\n>- is it done once for the TOASTer, once per relation, etc.\n>5.2. Why there is no free() method?\n\nThese too. About the free() method - Toasters are not meant to be deleted,\nwe mentioned this several times. Once created, they exist for as long\nas the DB itself. Have I answered your question?\n\n>6. It's not clear for what reason update_toast() will be executed to\n>begin with. This should be clarified in the documentation. How does it\n>differ from copy_toast()?\n\nIt is not clear because the current TOAST mechanics does not have UPDATE\nfunctionality - it doesn't actually update the TOASTed value, it marks this\nvalue \"dead\" and inserts a new one. This is the cause of the TOAST table bloat\nthat is being complained about by many users. The update method is provided\nfor the implementation of the UPDATE operation.\n\n>7. It should be explicitly stated when validate() is called and what\n>happens depending on the return value.\n\nAccepted, will correct this topic.\n\n>8. 
IMO this section does a very poor job in explaining what this is\n>for and when I may want to use this; what particular problem are we\n>addressing here?\n\nI already answered this question, maybe the answer was not very clear.\nThis is just an extension entry point, because for some more advanced\nfunctionality the existing pre-defined set of methods would not be enough, e.g.\nspecial Toasters for complex datatypes like JSONb, that have a complex\ninternal structure and may require additional ways to interact with it besides\ntoast/detoast/update/delete.\n\nAlso, I'll check the README and documentation for typos.\n\nOn Thu, Nov 3, 2022 at 1:39 PM Aleksander Alekseev <aleksander@timescale.com>\nwrote:\n\n> Hi Nikita,\n>\n> > I'm going to submit a more detailed code review soon.\n>\n> As promised, here is my feedback.\n>\n> ```\n> + Toaster could be assigned to toastable column with expression\n> + <literal>STORAGE external TOASTER\n> <replaceable>toaster_name</replaceable></literal>\n> ```\n>\n> 1.1. Could you please clarify what is the motivation behind such a\n> verbose syntax?\n>\n> ```\n> +Please note that custom Toasters could be assigned only to External\n> +storage type.\n> ```\n>\n> 1.2. That's odd. TOAST should work for EXTENDED and MAIN storage\n> strategies as well. On top of that, why should custom TOASTers have\n> any knowledge of the default four-stage algorithm and the storage\n> strategies? If the storage strategy is actually ignored, it shouldn't\n> be used in the syntax.\n>\n> ```\n> +Toasters could use any storage, advanced compression, encryption, and\n> ```\n>\n> 2. 
Although it's possible to implement some encryption in a TOASTer I\n> don't think the documentation should advertise this.\n>\n> ```\n> +typedef struct TsrRoutine\n> +{\n> + NodeTag type;\n> +\n> + /* interface functions */\n> + toast_init init;\n> + toast_function toast;\n> + update_toast_function update_toast;\n> + copy_toast_function copy_toast;\n> + detoast_function detoast;\n> + del_toast_function deltoast;\n> + get_vtable_function get_vtable;\n> + toastervalidate_function toastervalidate;\n> +} TsrRoutine;\n> ```\n>\n> 3.1. I believe we should rename this to something like `struct\n> ToastImpl`. The `Tsr` abbreviation only creates confusion, and this is\n> not a routine.\n> 3.2. The names of the fields should be made consistent - e.g. if you\n> have update_toast and copy_toast then del_toast should be renamed to\n> delete_toast.\n> 3.2. Additionally, in some parts of the path del_toast is used, while\n> in others - deltoast.\n>\n> 4. The user documentation should have clear answers on the following\n> questions:\n> 4.1. What will happen if the user tries to delete a TOASTer while\n> still having data that was TOASTed using this TOASTer? Or this is not\n> supported and the TOASTers should exist in the database indefinitely\n> after creation?\n> 4.2. Is it possible to delete and/or alter the default TOASTer?\n> 4.3. Please make sure the previously discussed clarification regarding\n> \"N TOASTers vs M TableAMs\" and the validation function is explicitly\n> present.\n>\n> ```\n> Toaster initialization. Initial TOAST table creation and other startup\n> preparations.\n> ```\n>\n> 5.1. The documentation should clarify how many times init() is called\n> - is it done once for the TOASTer, once per relation, etc.\n> 5.2. Why there is no free() method?\n>\n> ```\n> Updates TOASTed value. Returns new TOAST pointer.\n> ```\n>\n> 6. It's not clear for what reason update_toast() will be executed to\n> begin with. This should be clarified in the documentation. 
How does it\n> differ from copy_toast()?\n>\n> ```\n> Validates Toaster for given data type, storage and compression options.\n> ```\n>\n> 7. It should be explicitly stated when validate() is called and what\n> happens depending on the return value.\n>\n> ```\n> +Virtual Functions Table API Extension\n> ```\n>\n> 8. IMO this section does a very poor job in explaining what this is\n> for and when I may want to use this; what particular problem are we\n> addressing here?\n>\n> 9. There are typos in the comments and the documentation,\n> s/Vaitual/Virtual/, s/vurtual/virtual/ etc. Also there are missing\n> articles. Please run your patches through a spellchecker.\n>\n> I suggest we address this piece of feedback before proceeding further.\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nHi!Thank you very much for the feedback.Answering accordingly to questions/remarks order: >I'm in the process of testing and reviewing the v23 patchset. So far I>just wanted to report that there seems to be certain issues with the>formatting and/or trailing whitespaces in the patches:Accepted, will be done.>There are also some compiler warnings, please see the attachment.This too. There warnings showed up recently due to fresh changes in macros.>1.1. Could you please clarify what is the motivation behind such a>verbose syntax?Toaster is set for the table column. Each TOASTable column could havea different Toaster, so column option is the most obvious place to add it.>1.2. That's odd. TOAST should work for EXTENDED and MAIN storage>strategies as well. On top of that, why should custom TOASTers have>any knowledge of the default four-stage algorithm and the storage>strategies? If the storage strategy is actually ignored, it shouldn't>be used in the syntax.EXTENDED storage strategy means that TOASTed value is compressedbefore being TOASTed, so no knowledge of its internals could be of anyuse. 
EXTERNAL strategy means that value is being TOASTed in originalform.  Storage strategy is the thing internal to AM used, and TOASTmechanics is not meant to interfere with it. Again, STORAGE EXTERNALexplicitly shows that value will be stored out-of-line.>2. Although it's possible to implement some encryption in a TOASTer I>don't think the documentation should advertise this.It is a good example of what could the Toaster be responsible for, becausewe proposed moving compression into Toasters as a TOAST option -for example, being set while initializing the Toaster.>3.1. I believe we should rename this to something like `struct>ToastImpl`. The `Tsr` abbreviation only creates confusion, and this is>not a routine.It was done similar to Table AM Routine (please check PluggableStorage API), along with some other decisions.>3.2. The names of the fields should be made consistent - e.g. if you>have update_toast and copy_toast then del_toast should be renamed to>delete_toast.>3.2. Additionally, in some parts of the path del_toast is used, while>in others - deltoast.Accepted, will be fixed in the next rebase.>4. The user documentation should have clear answers on the following questions:>4.1. What will happen if the user tries to delete a TOASTer while>still having data that was TOASTed using this TOASTer? Or this is not>supported and the TOASTers should exist in the database indefinitely>after creation?>4.2. Is it possible to delete and/or alter the default TOASTer?>4.3. Please make sure the previously discussed clarification regarding>\"N TOASTers vs M TableAMs\" and the validation function is explicitly>present.Thank you very much for checking the documentation package. These arevery good comments, I'll include these topics in the next patchset.>5.1. The documentation should clarify how many times init() is called>- is it done once for the TOASTer, once per relation, etc.>5.2. Why there is no free() method?These too. 
About free() method - Toasters are not meant to be deleted,we mentioned this several times. They exist once they are created as longas the DB itself. Have I answered your question?>6. It's not clear for what reason update_toast() will be executed to>begin with. This should be clarified in the documentation. How does it>differ from copy_toast()?It is not clear because current TOAST mechanics does not have UPDATEfunctionality - it doesn't actually update TOASTed value, it marks this value\"dead\" and inserts a new one. This is the cause of TOAST tables bloatingthat is being complained about by many users. Update method is providedfor implementation of UPDATE operation.>7. It should be explicitly stated when validate() is called and what>happens depending on the return value.Accepted, will correct this topic.>8. IMO this section does a very poor job in explaining what this is>for and when I may want to use this; what particular problem are we>addressing here?I already answered this question, maybe the answer was not very clear.This is just an extension entry point, because for some more advancedfunctionality existing pre-defined set of methods would be not enough, i.e.special Toasters for complex datatypes like JSONb, that have complexinternal structure and may require additional ways to interact with it alongtoast/detoast/update/delete.Also, I'll check README and documentation for typos.On Thu, Nov 3, 2022 at 1:39 PM Aleksander Alekseev <aleksander@timescale.com> wrote:Hi Nikita,\n\n> I'm going to submit a more detailed code review soon.\n\nAs promised, here is my feedback.\n\n```\n+      Toaster could be assigned to toastable column with expression\n+      <literal>STORAGE external TOASTER\n<replaceable>toaster_name</replaceable></literal>\n```\n\n1.1. Could you please clarify what is the motivation behind such a\nverbose syntax?\n\n```\n+Please note that custom Toasters could be assigned only to External\n+storage type.\n```\n\n1.2. That's odd. 
TOAST should work for EXTENDED and MAIN storage\nstrategies as well. On top of that, why should custom TOASTers have\nany knowledge of the default four-stage algorithm and the storage\nstrategies? If the storage strategy is actually ignored, it shouldn't\nbe used in the syntax.\n\n```\n+Toasters could use any storage, advanced compression, encryption, and\n```\n\n2. Although it's possible to implement some encryption in a TOASTer I\ndon't think the documentation should advertise this.\n\n```\n+typedef struct TsrRoutine\n+{\n+    NodeTag        type;\n+\n+    /* interface functions */\n+    toast_init init;\n+    toast_function toast;\n+    update_toast_function update_toast;\n+    copy_toast_function copy_toast;\n+    detoast_function detoast;\n+    del_toast_function deltoast;\n+    get_vtable_function get_vtable;\n+    toastervalidate_function toastervalidate;\n+} TsrRoutine;\n```\n\n3.1. I believe we should rename this to something like `struct\nToastImpl`. The `Tsr` abbreviation only creates confusion, and this is\nnot a routine.\n3.2. The names of the fields should be made consistent - e.g. if you\nhave update_toast and copy_toast then del_toast should be renamed to\ndelete_toast.\n3.2. Additionally, in some parts of the path del_toast is used, while\nin others - deltoast.\n\n4. The user documentation should have clear answers on the following questions:\n4.1. What will happen if the user tries to delete a TOASTer while\nstill having data that was TOASTed using this TOASTer? Or this is not\nsupported and the TOASTers should exist in the database indefinitely\nafter creation?\n4.2. Is it possible to delete and/or alter the default TOASTer?\n4.3. Please make sure the previously discussed clarification regarding\n\"N TOASTers vs M TableAMs\" and the validation function is explicitly\npresent.\n\n```\nToaster initialization. Initial TOAST table creation and other startup\npreparations.\n```\n\n5.1. 
The documentation should clarify how many times init() is called
- is it done once for the TOASTer, once per relation, etc.
5.2. Why there is no free() method?

```
Updates TOASTed value. Returns new TOAST pointer.
```

6. It's not clear for what reason update_toast() will be executed to
begin with. This should be clarified in the documentation. How does it
differ from copy_toast()?

```
Validates Toaster for given data type, storage and compression options.
```

7. It should be explicitly stated when validate() is called and what
happens depending on the return value.

```
+Virtual Functions Table API Extension
```

8. IMO this section does a very poor job in explaining what this is
for and when I may want to use this; what particular problem are we
addressing here?

9. There are typos in the comments and the documentation,
s/Vaitual/Virtual/, s/vurtual/virtual/ etc. Also there are missing
articles. Please run your patches through a spellchecker.

I suggest we address this piece of feedback before proceeding further.

-- 
Best regards,
Aleksander Alekseev

-- 
Regards,
Nikita Malakhov
Postgres Professional
https://postgrespro.ru/", "msg_date": "Thu, 3 Nov 2022 14:33:47 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi Nikita,

Please, avoid top-posting [1].

> Toaster is set for the table column. Each TOASTable column could have
> a different Toaster, so column option is the most obvious place to add it.

This is a major limitation. IMO the user should be able to set a
custom TOASTer for the entire table as well. Ideally - for the entire
database too. This could be implemented entirely on the syntax level,
the internals of the patch are not going to be affected.

> >1.2. That's odd. TOAST should work for EXTENDED and MAIN storage
> >strategies as well. 
On top of that, why should custom TOASTers have\n> >any knowledge of the default four-stage algorithm and the storage\n> >strategies? If the storage strategy is actually ignored, it shouldn't\n> >be used in the syntax.\n>\n> EXTENDED storage strategy means that TOASTed value is compressed\n> before being TOASTed, so no knowledge of its internals could be of any\n> use. EXTERNAL strategy means that value is being TOASTed in original\n> form. Storage strategy is the thing internal to AM used, and TOAST\n> mechanics is not meant to interfere with it. Again, STORAGE EXTERNAL\n> explicitly shows that value will be stored out-of-line.\n\nLet me rephrase. Will the custom TOASTers work only for EXTERNAL\nstorage strategy or this is just a syntax?\n\n> >2. Although it's possible to implement some encryption in a TOASTer I\n> >don't think the documentation should advertise this.\n>\n> It is a good example of what could the Toaster be responsible for\n\nNo, encryption is an excellent example of what a TOASTer should NOT\ndo. If you are interested in encryption consider joining the \"Moving\nforward with TDE\" thread [2].\n\n> >3.1. I believe we should rename this to something like `struct\n> >ToastImpl`. The `Tsr` abbreviation only creates confusion, and this is\n> >not a routine.\n>\n> It was done similar to Table AM Routine (please check Pluggable\n> Storage API), along with some other decisions.\n\nOK, then maybe we shall keep the \"Routine\" part for consistency. I\nstill don't like the \"Tsr\" abbreviation though and find it confusing.\n\n> It is not clear because current TOAST mechanics does not have UPDATE\n> functionality - it doesn't actually update TOASTed value, it marks this value\n> \"dead\" and inserts a new one. This is the cause of TOAST tables bloating\n> that is being complained about by many users. Update method is provided\n> for implementation of UPDATE operation.\n\nBut should we really distinguish INSERT and UPDATE cases on this API\nlevel? 
It seems to me that TableAM just inserts new tuples. It's\nTOASTers job to figure out whether similar values existed before and\nshould or shouldn't be reused. Additionally a particular TOASTer can\nreuse old values between _different_ rows, potentially even from\ndifferent tables. Another reason why in practice there is little use\nof knowing whether the data is INSERTed or UPDATEd.\n\n> I already answered this question, maybe the answer was not very clear.\n> This is just an extension entry point, because for some more advanced\n> functionality existing pre-defined set of methods would be not enough, i.e.\n> special Toasters for complex datatypes like JSONb, that have complex\n> internal structure and may require additional ways to interact with it along\n> toast/detoast/update/delete.\n\nMaybe so, but it doesn't change the fact that the user documentation\nshould clearly describe the interface and its usage.\n\n> These too. About free() method - Toasters are not meant to be deleted,\n> we mentioned this several times. They exist once they are created as long\n> as the DB itself. Have I answered your question?\n\nUsers should be able to DROP extension. I seriously doubt that the\npatch is going to be accepted as long as it has this limitation.\n\n[1]: https://wiki.postgresql.org/wiki/Mailing_Lists#Email_etiquette_mechanics\n[2]: https://www.postgresql.org/message-id/flat/CAOxo6XJh95xPOpvTxuP_kiGRs8eHcaNrmy3kkzWrzWxvyVkKkQ%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 3 Nov 2022 16:26:38 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\n\nSetting TOAST for table and database is a subject for discussion. There is\nalready default\nToaster. 
Also, there is not much sense in setting Jsonb Toaster as
default even for table, do
not say database, because table could contain other TOASTable columns not
of Json type.

To be able to set custom Toaster as default for table you have to make it
work with ALL
TOASTable datatypes - which leads to lots and lots lines of code,
complexity and difficulties
supporting such custom Toaster. Custom Toasters are meant to be rather
small and have
specialty in some tricky datatypes or workflow.

Custom Toasters will work with Extended storage, but as I answered in
previous email -
there is no much use of it, because they would deal with compressed data.

>No, encryption is an excellent example of what a TOASTer should NOT
>do. If you are interested in encryption consider joining the \"Moving
>forward with TDE\" thread [2].

I'm not working with encryption, so maybe it is really out of scope
example. Anyway,
compression and dealing with data with known internal structure or some
special
requirements like geometric data in PostGIS - for example, custom PostGIS
Toaster gives
considerable performance boost.

>But should we really distinguish INSERT and UPDATE cases on this API
>level? It seems to me that TableAM just inserts new tuples. It's
>TOASTers job to figure out whether similar values existed before and
>should or shouldn't be reused. Additionally a particular TOASTer can
>reuse old values between _different_ rows, potentially even from
>different tables. Another reason why in practice there is little use
>of knowing whether the data is INSERTed or UPDATEd.

For TOASTer you SHOULD distinguish insert and update operations, really.
Because for
TOASTed data these operations affect many tuples, and AM does know which of
them
were updated and which were not - that's very serious limitation of current
TOAST, and
TOAST mechanics should care about updating affected tuples only instead of
marking
whole record dead and inserting new one. 
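To illustrate that point, here is a toy, self-contained C model (hypothetical names - not the patch's code; real TOAST chunks live in a toast relation): it compares a full rewrite against rewriting only the chunks whose bytes actually changed.

```c
#include <assert.h>
#include <string.h>

/* Toy model of a value stored out-of-line as fixed-size chunks.
 * All names here are hypothetical, for illustration only. */
#define CHUNK_SIZE 4
#define MAX_CHUNKS 64

typedef struct ToyToastValue
{
    char chunks[MAX_CHUNKS][CHUNK_SIZE];
    int  nchunks;
} ToyToastValue;

/* "Delete + reinsert" style update: every chunk is rewritten
 * (the old copy becomes dead, all new chunks are WAL-logged). */
static int
update_full_rewrite(ToyToastValue *v, const char *newdata, int len)
{
    int n = (len + CHUNK_SIZE - 1) / CHUNK_SIZE;

    for (int i = 0; i < n; i++)
    {
        int clen = (i == n - 1) ? len - i * CHUNK_SIZE : CHUNK_SIZE;

        memcpy(v->chunks[i], newdata + i * CHUNK_SIZE, clen);
    }
    v->nchunks = n;
    return n;                   /* number of chunks written */
}

/* "Update-aware" style: only chunks that actually differ are rewritten,
 * so only the diff needs to be stored and WAL-logged. */
static int
update_changed_chunks(ToyToastValue *v, const char *newdata, int len)
{
    int n = (len + CHUNK_SIZE - 1) / CHUNK_SIZE;
    int written = 0;

    for (int i = 0; i < n; i++)
    {
        int clen = (i == n - 1) ? len - i * CHUNK_SIZE : CHUNK_SIZE;

        if (i >= v->nchunks ||
            memcmp(v->chunks[i], newdata + i * CHUNK_SIZE, clen) != 0)
        {
            memcpy(v->chunks[i], newdata + i * CHUNK_SIZE, clen);
            written++;
        }
    }
    v->nchunks = n;
    return written;             /* only the changed chunks were written */
}
```

In this sketch a small change to a large value rewrites one chunk instead of all of them, which is the source of the claimed WAL and TOAST-table savings.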
This is also an argument for not
using EXTENDED
storage mode - because with compressed data you do not have such choice,
you should
drop the whole record.

Correctly implemented UPDATE for TOAST boosts performance and considerably
decreases size of TOAST tables along with WAL size. This is not a question,
an UPDATE
operation for TOASTed data is a must - consider updating 1 Gb TOASTed
record - with
current TOAST you would finish having two 1 Gb records in a table, one of
them dead, and
2 Gb in WAL. With update you would have the same 1 Gb record and only
update diff in WAL.

>Users should be able to DROP extension. I seriously doubt that the
>patch is going to be accepted as long as it has this limitation.

There is a mention in documentation and previous discussion that this
operation would lead
to loss of data TOASTed with this custom Toaster. It was stated as an issue
and subject for
further discussion in previous emails.

>
> --
Regards,
Nikita Malakhov
Postgres Professional
https://postgrespro.ru/
", "msg_date": "Thu, 3 Nov 2022 17:30:02 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi Nikita,

> Setting TOAST for table and database is a subject for discussion. There is already default
> Toaster. Also, there is not much sense in setting Jsonb Toaster as default even for table, do
> not say database, because table could contain other TOASTable columns not of Json type.

True, but besides Jsonb Toaster one can implement a universal one as
well (your own example: encryption, or a Toaster that bypasses a 1 GB
value limitation). However we can probably keep this out of scope of
this particular patchset for now. As mentioned before this is going to
be just an additional syntax for the user convenience.

> Custom Toasters will work with Extended storage, but as I answered in previous email -
> there is no much use of it, because they would deal with compressed data.

Compression is actually a part of the four-stage TOAST algorithm. So
what you're doing is using the default TOAST most of the time, but if
the storage strategy is EXTERNAL and a custom TOASTer is specified
then you use a type-aware \"TOASTer\".

If the goal is to implement true \"Pluggable TOASTers\" then the TOAST
should be substituted entirely. If the goal is to implement type-aware
sub-TOASTers we should be honest about this and call it otherwise:
e.g. \"Type-aware TOASTer\" or perhaps \"subTOASTer\". 
Additionally in\nthis case there should be no validate() method since this is going to\nwork only with the default heapam that implements the default TOASTer.\n\nSo to clarify, the goal is to deliver true \"Pluggable TOASTers\" or\nonly \"type-aware sub-TOASTers\"?\n\n> For TOASTer you SHOULD distinguish insert and update operations, really. Because for\n> TOASTed data these operations affect many tuples, and AM does know which of them\n> were updated and which were not - that's very serious limitation of current TOAST, and\n> TOAST mechanics shoud care about updating affected tuples only instead of marking\n> whole record dead and inserting new one. This is also an argument for not using EXTENDED\n> storage mode - because with compressed data you do not have such choice, you should\n> drop the whole record.\n\nThis may actually be a good point. I suggest reflecting it in the documentation.\n\n> >Users should be able to DROP extension. I seriously doubt that the\n> >patch is going to be accepted as long as it has this limitation.\n>\n> There is a mention in documentation and previous discussion that this operation would lead\n> to loss of data TOASTed with this custom Toaster.\n\nI don't see any reason why the semantics for Toasters should be any\ndifferent from user-defined types implemented in an extension. If\nthere are columns that use a given Toaster we should forbid DROPping\nthe extension. Otherwise \"DROP extension\" should succeed. 
No one says
that this should be a fast operation but it should be possible.

[1]: https://www.postgresql.org/message-id/flat/CAJ7c6TOtAB0z1UrksvGTStNE-herK-43bj22=5xVBg7S4vr5rQ@mail.gmail.com

-- 
Best regards,
Aleksander Alekseev

", "msg_date": "Fri, 4 Nov 2022 12:25:46 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi again,

> [1]: https://www.postgresql.org/message-id/flat/CAJ7c6TOtAB0z1UrksvGTStNE-herK-43bj22=5xVBg7S4vr5rQ@mail.gmail.com

I added a link but forgot to use it, d'oh!

Please note that if the goal is to merely implement \"type-aware
sub-TOASTers\" then compression dictionaries [1] arguably provide the
same value with MUCH less complexity.

-- 
Best regards,
Aleksander Alekseev
", "msg_date": "Fri, 4 Nov 2022 12:35:18 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,

Pluggable TOAST is provided as an interface to allow developers to plug
in custom TOAST mechanics. It does not determine whether it would be a universal
Toaster or one for a single data type, but syntax for a universal Toaster is out of scope
for this patchset.

>True, but besides Jsonb Toaster one can implement a universal one as
>well (your own example: encryption, or a Toaster that bypasses a 1 GB
>value limitation). However we can probably keep this out of scope of
>this particular patchset for now. As mentioned before this is going to
>be just an additional syntax for the user convenience.

To transparently bypass the 1 Gb limit you have to increase size of data
VARLENA type is able to hold. This is out of scope for this patchset too,
but as I mentioned before, there are means to do this with Pluggable
TOAST using toast and detoast iterators.

>Compression is actually a part of the four-stage TOAST algorithm. 
So
>what you're doing is using the default TOAST most of the time, but if
>the storage strategy is EXTERNAL and a custom TOASTer is specified
>then you use a type-aware \"TOASTer\".

We provide several custom Toasters for particular types of data causing
a lot of problems in storage. The main idea behind Pluggable TOAST is
using special data-aware Toasters where it is needed.
Extended storage mode supports only 2 compression algorithms, though
there are more efficient ones for different types of data.

>If the goal is to implement true \"Pluggable TOASTers\" then the TOAST
>should be substituted entirely. If the goal is to implement type-aware
>sub-TOASTers we should be honest about this and call it otherwise:
>e.g. \"Type-aware TOASTer\" or perhaps \"subTOASTer\". Additionally in
>this case there should be no validate() method since this is going to
>work only with the default heapam that implements the default TOASTer.

>So to clarify, the goal is to deliver true \"Pluggable TOASTers\" or
>only \"type-aware sub-TOASTers\"?

Pluggable TOAST does not suppose complete substitution of existing
TOAST mechanics - this is out of scope for this patchset. It proposes
means to substitute it or plug in additional custom TOAST mechanics
for particular data types.

>I don't see any reason why the semantics for Toasters should be any
>different from user-defined types implemented in an extension. If
>there are columns that use a given Toaster we should forbid DROPping
>the extension. Otherwise \"DROP extension\" should succeed. No one says
>that this should be a fast operation but it should be possible.

I'm currently working on a revision of Pluggable TOAST that would make
dropping Toaster possible if there is no data TOASTed with it, along with
several other major changes. 
It will be available in this (I hope so) or the
following, if I won't make it in time, commitfest.

On Fri, Nov 4, 2022 at 12:35 PM Aleksander Alekseev <
aleksander@timescale.com> wrote:

> Hi again,
>
> > [1]:
> https://www.postgresql.org/message-id/flat/CAJ7c6TOtAB0z1UrksvGTStNE-herK-43bj22=5xVBg7S4vr5rQ@mail.gmail.com
>
> I added a link but forgot to use it, d'oh!
>
> Please note that if the goal is to merely implement \"type-aware
> sub-TOASTers\" then compression dictionaries [1] arguably provide the
> same value with MUCH less complexity.
>
> --
> Best regards,
> Aleksander Alekseev
>

-- 
Regards,
Nikita Malakhov
Postgres Professional
https://postgrespro.ru/
", "msg_date": "Sun, 6 Nov 2022 17:59:13 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi Nikita,

> Pluggable TOAST is provided as an interface to allow developers to plug
> in custom TOAST mechanics. It does not determines would it be universal
> Toaster or one data type, but syntax for universal Toaster is out of
> scope
> for this patchset.

If I understand correctly, what is going to happen - the same
interface TsrRoutine is going to be used for doing two things:

1) Implementing type-aware TOASTers as a special case for the default
TOAST algorithm when EXTERNAL storage strategy is used.
2) Implementing universal TOASTers from scratch that have nothing to
do with the default TOAST algorithm.

Assuming this is the case, using the same interface for doing two very
different things doesn't strike me as a great design decision. While
working on v24 you may want to rethink this.

Personally I believe that Pluggable TOASTers should support only case
#2. 
If there is a need of reusing parts of the default TOASTer,\ncorresponding pieces of code should be declared as non-static and\ncalled from the pluggable TOASTers directly.\n\nAlternatively we could have separate interfaces for case #1 and case\n#2 but this IMO becomes rather complicated.\n\n> I'm currently working on a revision of Pluggable TOAST that would make\n> dropping Toaster possible if there is no data TOASTed with it, along with\n> several other major changes. It will be available in this (I hope so) or the\n> following, if I won't make it in time, commitfest.\n\nLooking forward to v24!\n\nThis is a major change so I hope there will be more feedback from\nother people on the mailing list.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 7 Nov 2022 13:35:18 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers!\n\nI want to bump this thread and remind the community about Pluggable TOAST.\nOverall Pluggable TOAST was revised to work without altering PG_ATTRIBUTE\ntable\n- no atttoaster column, allow to drop unused Toasters and some other\nimprovements.\nSorry for the big delay, but it was a big piece of work to do, and the work\nis still going on.\n\nHere are the main highlights:\n\n1) No need to modify the PG_ATTRIBUTE table. 
We've introduced new catalog\ntable with a set of internal support functions that keeps all table-toaster\nrelations:\npostgres@postgres=# \\d+ pg_toastrel;\n Table \"pg_catalog.pg_toastrel\"\n Column | Type | Collation | Nullable | Default | Storage |\nToaster | Compression | Stats target | Description\n--------------+----------+-----------+----------+---------+---------+---------+-------------+--------------+-------------\n oid | oid | | not null | | plain |\n | | |\n toasteroid | oid | | not null | | plain |\n | | |\n relid | oid | | not null | | plain |\n | | |\n toastentid | oid | | not null | | plain |\n | | |\n attnum | smallint | | not null | | plain |\n | | |\n version | smallint | | not null | | plain |\n | | |\n relname | name | | not null | | plain |\n | | |\n toastentname | name | | not null | | plain |\n | | |\n flag | \"char\" | | not null | | plain |\n | | |\n toastoptions | \"char\" | | not null | | plain |\n | | |\nIndexes:\n \"pg_toastrel_oid_index\" PRIMARY KEY, btree (oid)\n \"pg_toastrel_name_index\" UNIQUE CONSTRAINT, btree (toasteroid, relid,\nversion, attnum)\n \"pg_toastrel_rel_index\" btree (relid, attnum)\n \"pg_toastrel_tsr_index\" btree (toasteroid)\nAccess method: heap\n(This is not final definition)\n\nThis approach allows us to keep all Toaster assignment history, as well as\ncorrectly storing\nToasters assigned to different columns of the relation, and even have\nseparate TOAST\nstorage entities (these not necessary to be regular TOAST tables) for\ndifferent columns.\n\nWhen the table with the TOASTable column is created - a new row is inserted\ninto\nPG_TOASTREL with source table OID, Toaster OID, created TOAST entity OID,\ncolumn\n(attribute) index. Special field \"version\" is used to keep history of\nToasters assigned to\nthe column - it is a counter which increases with each assignment, and the\nbiggest version\nis the current Toaster for the column. 
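As a rough sketch of that version rule (a toy model with hypothetical names, not the actual PG_TOASTREL code): the current Toaster for a column is the row with the highest version for (relid, attnum).

```c
#include <assert.h>

/* Toy model of a pg_toastrel lookup. All names are hypothetical,
 * for illustration only - the real catalog has more columns. */
typedef struct ToyToastrelRow
{
    unsigned relid;             /* table the column belongs to */
    int      attnum;            /* column (attribute) index */
    int      version;           /* assignment counter */
    unsigned toasteroid;        /* toaster assigned at this version */
} ToyToastrelRow;

/* Return the toaster OID with the highest version for (relid, attnum),
 * or 0 (an InvalidOid-like value) if no toaster was ever assigned. */
static unsigned
current_toaster(const ToyToastrelRow *rows, int nrows,
                unsigned relid, int attnum)
{
    int      best_version = -1;
    unsigned best = 0;

    for (int i = 0; i < nrows; i++)
    {
        if (rows[i].relid == relid && rows[i].attnum == attnum &&
            rows[i].version > best_version)
        {
            best_version = rows[i].version;
            best = rows[i].toasteroid;
        }
    }
    return best;
}
```

The model keeps the full assignment history: reassigning a toaster only adds a row with a larger version, and different columns of the same table resolve independently.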
All assigned Toasters are
automatically cached,
and when the value is TOASTed - first lookup is done in cache, and if there
is no cached
Toaster it is searched in PG_TOASTREL and inserted in cache.

2) Attribute \"reltoastrelid\" was replaced with calls of PG_TOASTREL support
functions.
This was done to allow each TOASTed column to be assigned with different
Toaster
and have its individual TOAST table.

3) DROP TABLE command was modified to remove corresponding records from the
PG_TOASTREL - to allow dropping toasters that are out of use.

4) DROP TOASTER command was introduced. This command allows to drop unused
Toasters - the ones that do not have records in PG_TOASTREL. If the Toaster
was
assigned to a column - it could not be dropped, because all data TOASTed
with it will
be lost.

The branch is still in development so it is too early for a patch, but
here's a link to the repo:
https://github.com/postgrespro/postgres/tree/toastapi_with_ctl

On Mon, Nov 7, 2022 at 1:35 PM Aleksander Alekseev <aleksander@timescale.com>
wrote:

> Hi Nikita,
>
> > Pluggable TOAST is provided as an interface to allow developers to plug
> > in custom TOAST mechanics. It does not determines would it be universal
> > Toaster or one data type, but syntax for universal Toaster is out of
> scope
> > for this patchset.
>
> If I understand correctly, what is going to happen - the same
> interface TsrRoutine is going to be used for doing two things:
>
> 1) Implementing type-aware TOASTers as a special case for the default
> TOAST algorithm when EXTERNAL storage strategy is used.
> 2) Implementing universal TOASTers from scratch that have nothing to
> do with the default TOAST algorithm.
>
> Assuming this is the case, using the same interface for doing two very
> different things doesn't strike me as a great design decision. 
While
> working on v24 you may want to rethink this.
>
> Personally I believe that Pluggable TOASTers should support only case
> #2. If there is a need of reusing parts of the default TOASTer,
> corresponding pieces of code should be declared as non-static and
> called from the pluggable TOASTers directly.
>
> Alternatively we could have separate interfaces for case #1 and case
> #2 but this IMO becomes rather complicated.
>
> > I'm currently working on a revision of Pluggable TOAST that would make
> > dropping Toaster possible if there is no data TOASTed with it, along with
> > several other major changes. It will be available in this (I hope so) or
> the
> > following, if I won't make it in time, commitfest.
>
> Looking forward to v24!
>
> This is a major change so I hope there will be more feedback from
> other people on the mailing list.
>
> --
> Best regards,
> Aleksander Alekseev
>

-- 
Regards,

--
Nikita Malakhov
Postgres Professional
https://postgrespro.ru/
relation, and even have separate TOASTstorage entities (these not necessary to be regular TOAST tables) for different columns.When the table with the TOASTable column is created - a new row is inserted intoPG_TOASTREL with source table OID, Toaster OID, created TOAST entity OID, column(attribute) index. Special field \"version\" is used to keep history of Toasters assigned tothe column - it is a counter which increases with each assignment, and the biggest versionis the current Toaster for the column. All assigned Toasters are automatically cached,and when the value is TOASTed - first lookup is done in cache, and if there is no cachedToaster it is searched in PG_TOASTREL and inserted in cache.2) Attribute \"reltoastrelid\" was replaced with calls of PG_TOASTREL support functions.This was done to allow each TOASTed column to be assigned with different Toasterand have its individual TOAST table.3) DROP TABLE command was modified to remove corresponding records from thePG_TOASTREL - to allow dropping toasters that are out of use.4) DROP TOASTER command was introduced. This command allows to drop unusedToasters - the ones that do not have records in PG_TOASTREL. If the Toaster wasassigned to a column - it could not be dropped, because all data TOASTed with it willbe lost.The branch is still in development so I it is too early for patch but here's link to the repo:https://github.com/postgrespro/postgres/tree/toastapi_with_ctlOn Mon, Nov 7, 2022 at 1:35 PM Aleksander Alekseev <aleksander@timescale.com> wrote:Hi Nikita,\n\n> Pluggable TOAST is provided as an interface to allow developers to plug\n> in custom TOAST mechanics. 
It does not determines would it be universal\n> Toaster or one data type, but syntax for universal Toaster is out of scope\n> for this patchset.\n\nIf I understand correctly, what is going to happen - the same\ninterface TsrRoutine is going to be used for doing two things:\n\n1) Implementing type-aware TOASTers as a special case for the default\nTOAST algorithm when EXTERNAL storage strategy is used.\n2) Implementing universal TOASTers from scratch that have nothing to\ndo with the default TOAST algorithm.\n\nAssuming this is the case, using the same interface for doing two very\ndifferent things doesn't strike me as a great design decision. While\nworking on v24 you may want to rethink this.\n\nPersonally I believe that Pluggable TOASTers should support only case\n#2. If there is a need of reusing parts of the default TOASTer,\ncorresponding pieces of code should be declared as non-static and\ncalled from the pluggable TOASTers directly.\n\nAlternatively we could have separate interfaces for case #1 and case\n#2 but this IMO becomes rather complicated.\n\n> I'm currently working on a revision of Pluggable TOAST that would make\n> dropping Toaster possible if there is no data TOASTed with it, along with\n> several other major changes. It will be available in this (I hope so) or the\n> following, if I won't make it in time, commitfest.\n\nLooking forward to v24!\n\nThis is a major change so I hope there will be more feedback from\nother people on the mailing list.\n\n-- \nBest regards,\nAleksander Alekseev\n-- Regards,--Nikita MalakhovPostgres Professional https://postgrespro.ru/", "msg_date": "Thu, 15 Dec 2022 23:37:15 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers!\n\n Pluggable TOAST API with catalog control table PG_TOASTREL - pre-patch.\n\n Pluggable TOAST - TOAST API rework - introduce PG_TOASTREL catalog\n relation containing TOAST dependencies. 
NOTE: here is a pre-patch, not\n a final version, just to introduce another approach to a Pluggable TOAST\n idea, it needs some cleanup, tests rework and some improvements, so\nthe main\n goal of this message is to introduce this different approach. This is\nthe\n last patch and it is installed on top of older TOAST API patches, so\nhere\n are 3 patches attached:\n\n 0001_toaster_interface_v24.patch.gz\n This patch introduces new custom TOAST pointer, Pluggable TOAST API and\n Toaster support functions - cache, lookup, and new attribute\n'atttoaster'\n in PG_ATTRIBUTE table which stores Toaster OID;\n\n 0002_toaster_default_v24.patch.gz\n Here the default TOAST mechanics is routed via TOAST API, but still\nusing\n varatt_external TOAST Pointer - so this step does not change overall\nTOAST\n mechanics unless you plug in some custom Toaster;\n\n 0003_pg_toastrel_table_v24.patch.gz\n Here Pluggable TOAST is reworked not to modify PG_ATTRIBUTE, instead\nthis\n patch introduces new catalog table PG_TOASTREL with its support\nfunctions.\n\n Motivation: PG_ATTRIBUTE is already the largest catalog table. We try\n to avoid modification of existing catalog tables, and previous solution\n had several problems:\n 1) New field in PG_ATTRIBUTE;\n 2) No opportunity to save all Toaster assignment history;\n 3) No opportunity to have multi-TOAST tables assigned to a relation or\n an attribute;\n 4) Toaster cannot be dropped - to drop Toaster we need to scan all\ntables\n with TOASTable columns.\n\n Instead of extending PG_ATTRIBUTE with ATTTOASTER attribute, we decided\n to store all Table-Toaster relations in a new catalog table PG_TOASTREL.\n This cancels the necessity to modify catalog table PG_ATTRIBUTE, allows\nto store\n full history of Toasters assignments, and allows to drop unused Toasters\n from system.\n\n Toasters are assigned to a table column. ALTER TABLE ... SET TOASTER\ncommand\n creates a new row in PG_TOASTREL. 
To distinguish sequential assignments,\n PG_TOASTREL has special attribute - 'version'. With each new assignment\n its 'version' attribute is increased, and the row with the biggest\n'version'\n is the current Toaster for a column.\n\n This approach allows to provide different behavior, even for a single\ntable\n we can have one TOAST table for the whole relation (as it is in current\nTOAST\n mechanics), or we can have separate TOAST relation(s) for each TOASTable\n column - this requires a slight modification if current approach. The\nlatter\n also allows simple invariant of column-oriented storage.\n\n Also, this approach makes PG_ATTRIBUTE attribute RELTOASTRELID obsolete\n-\n current mechanics allows only 1 TOAST table for relation, which limits\n greatly TOAST capabilities - because all TOASTed columns are stored in\nthis\n table, which in its turn limits overall base relation capacity.\n\n In future, this approach allows us to have a kind of near infinite TOAST\n storage, with ability to store large values (larger than 512 Mbytes),\n auto-creation of TOAST table only when the first value is actually\nTOASTed,\n and much more.\n\n The approach, along with the TOAST API itself, introduces the catalog\ntable\n PG_TOASTREL with a set of support functions.\n\n PG_TOASTREL definition:\n\n postgres@postgres=# \\d+ pg_toastrel;\n Table\n\"pg_catalog.pg_toastrel\"\n Column | Type | Collation | Nullable | Default | Storage |\nToaster | Compression | Stats target | Description\n\n-------------+----------+-----------+----------+---------+---------+---------+-------------+--------------+-------------\n oid | oid | | not null | | plain |\n | | |\n toasteroid | oid | | not null | | plain |\n | | |\n relid | oid | | not null | | plain |\n | | |\n toastentid | oid | | not null | | plain |\n | | |\n attnum | smallint | | not null | | plain |\n | | |\n version | smallint | | not null | | plain |\n | | |\n relname | name | | not null | | plain |\n | | |\n toastentname | name | | 
not null | | plain |\n | | |\n flag | \"char\" | | not null | | plain |\n | | |\n toastoptions | \"char\" | | not null | | plain |\n | | |\n Indexes:\n \"pg_toastrel_oid_index\" PRIMARY KEY, btree (oid)\n \"pg_toastrel_name_index\" UNIQUE CONSTRAINT, btree (toasteroid, relid,\nversion, attnum)\n \"pg_toastrel_rel_index\" btree (relid, attnum)\n \"pg_toastrel_tsr_index\" btree (toasteroid)\n Access method: heap\n (This is not a final definition)\n\n Where:\n oid - PG_TOASTREL record ID\n toasteroid - Toaster OID from PG_TOASTER\n relid - base relation OID\n toastentid - TOAST entity OID (not necessary to be a table)\n attnum - TOASTable attribute index in base relation\n version - Toaster assignment version - sequence of assignments\n relname - base relation name (optional)\n toastentname - TOAST entity name (optional)\n flag - special field to mark rows, currently only the value 'x' is used\n to mark unused rows\n\n PG_TOASTREL unique key consists of:\n toasteroid, relid, attnum, version\n\n All currently assigned Toasters are additionally stored in cache for\n fast access. When new row is being TOASTed - Toaster, relation Oid,\n TOAST relation Oid, column index are added into Toastrel Cache for fast\n access.\n\n Create table, change Toaster, change column type were changed to\n add new rows in PG_TOASTREL, to use this table and cache instead\n of altering pg_attribute with new column. For table creation from\n scratch when no TOAST tables were created is used special condition\n with version=0.\n\n DROP TABLE drops rows in PG_TOASTREL for this table. This allows to -\n DROP TOASTER command added. 
When no rows with the according Toaster are\n present in PG_TOASTREL - it is considered unused and thus could be\nsafely\n dropped from the system.\n\n Default toaster 'deftoaster' (reference TOAST mechanics) cannot be\ndropped.\n\nWorking branch:\nhttps://github.com/postgrespro/postgres/tree/toastapi_with_ctl\n\nWould be glad to get any proposals and objections.\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Tue, 27 Dec 2022 00:01:54 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "On Tue, 27 Dec 2022 at 02:32, Nikita Malakhov <hukutoc@gmail.com> wrote:\n>\n> Hi hackers!\n>\n> Pluggable TOAST API with catalog control table PG_TOASTREL - pre-patch.\n>\n> Pluggable TOAST - TOAST API rework - introduce PG_TOASTREL catalog\n> relation containing TOAST dependencies. NOTE: here is a pre-patch, not\n> a final version, just to introduce another approach to a Pluggable TOAST\n> idea, it needs some cleanup, tests rework and some improvements, so\n> the main\n> goal of this message is to introduce this different approach. 
This is the\n> last patch and it is installed on top of older TOAST API patches, so here\n> are 3 patches attached:\n>\n> 0001_toaster_interface_v24.patch.gz\n> This patch introduces new custom TOAST pointer, Pluggable TOAST API and\n> Toaster support functions - cache, lookup, and new attribute 'atttoaster'\n> in PG_ATTRIBUTE table which stores Toaster OID;\n>\n> 0002_toaster_default_v24.patch.gz\n> Here the default TOAST mechanics is routed via TOAST API, but still using\n> varatt_external TOAST Pointer - so this step does not change overall TOAST\n> mechanics unless you plug in some custom Toaster;\n>\n> 0003_pg_toastrel_table_v24.patch.gz\n> Here Pluggable TOAST is reworked not to modify PG_ATTRIBUTE, instead this\n> patch introduces new catalog table PG_TOASTREL with its support functions.\n>\n> Motivation: PG_ATTRIBUTE is already the largest catalog table. We try\n> to avoid modification of existing catalog tables, and previous solution\n> had several problems:\n> 1) New field in PG_ATTRIBUTE;\n> 2) No opportunity to save all Toaster assignment history;\n> 3) No opportunity to have multi-TOAST tables assigned to a relation or\n> an attribute;\n> 4) Toaster cannot be dropped - to drop Toaster we need to scan all tables\n> with TOASTable columns.\n>\n> Instead of extending PG_ATTRIBUTE with ATTTOASTER attribute, we decided\n> to store all Table-Toaster relations in a new catalog table PG_TOASTREL.\n> This cancels the necessity to modify catalog table PG_ATTRIBUTE, allows to store\n> full history of Toasters assignments, and allows to drop unused Toasters\n> from system.\n>\n> Toasters are assigned to a table column. ALTER TABLE ... SET TOASTER command\n> creates a new row in PG_TOASTREL. To distinguish sequential assignments,\n> PG_TOASTREL has special attribute - 'version'. 
With each new assignment\n> its 'version' attribute is increased, and the row with the biggest 'version'\n> is the current Toaster for a column.\n>\n> This approach allows to provide different behavior, even for a single table\n> we can have one TOAST table for the whole relation (as it is in current TOAST\n> mechanics), or we can have separate TOAST relation(s) for each TOASTable\n> column - this requires a slight modification if current approach. The latter\n> also allows simple invariant of column-oriented storage.\n>\n> Also, this approach makes PG_ATTRIBUTE attribute RELTOASTRELID obsolete -\n> current mechanics allows only 1 TOAST table for relation, which limits\n> greatly TOAST capabilities - because all TOASTed columns are stored in this\n> table, which in its turn limits overall base relation capacity.\n>\n> In future, this approach allows us to have a kind of near infinite TOAST\n> storage, with ability to store large values (larger than 512 Mbytes),\n> auto-creation of TOAST table only when the first value is actually TOASTed,\n> and much more.\n>\n> The approach, along with the TOAST API itself, introduces the catalog table\n> PG_TOASTREL with a set of support functions.\n>\n> PG_TOASTREL definition:\n>\n> postgres@postgres=# \\d+ pg_toastrel;\n> Table \"pg_catalog.pg_toastrel\"\n> Column | Type | Collation | Nullable | Default | Storage | Toaster | Compression | Stats target | Description\n> -------------+----------+-----------+----------+---------+---------+---------+-------------+--------------+-------------\n> oid | oid | | not null | | plain | | | |\n> toasteroid | oid | | not null | | plain | | | |\n> relid | oid | | not null | | plain | | | |\n> toastentid | oid | | not null | | plain | | | |\n> attnum | smallint | | not null | | plain | | | |\n> version | smallint | | not null | | plain | | | |\n> relname | name | | not null | | plain | | | |\n> toastentname | name | | not null | | plain | | | |\n> flag | \"char\" | | not null | | plain | | | 
|\n> toastoptions | \"char\" | | not null | | plain | | | |\n> Indexes:\n> \"pg_toastrel_oid_index\" PRIMARY KEY, btree (oid)\n> \"pg_toastrel_name_index\" UNIQUE CONSTRAINT, btree (toasteroid, relid, version, attnum)\n> \"pg_toastrel_rel_index\" btree (relid, attnum)\n> \"pg_toastrel_tsr_index\" btree (toasteroid)\n> Access method: heap\n> (This is not a final definition)\n>\n> Where:\n> oid - PG_TOASTREL record ID\n> toasteroid - Toaster OID from PG_TOASTER\n> relid - base relation OID\n> toastentid - TOAST entity OID (not necessary to be a table)\n> attnum - TOASTable attribute index in base relation\n> version - Toaster assignment version - sequence of assignments\n> relname - base relation name (optional)\n> toastentname - TOAST entity name (optional)\n> flag - special field to mark rows, currently only the value 'x' is used\n> to mark unused rows\n>\n> PG_TOASTREL unique key consists of:\n> toasteroid, relid, attnum, version\n>\n> All currently assigned Toasters are additionally stored in cache for\n> fast access. When new row is being TOASTed - Toaster, relation Oid,\n> TOAST relation Oid, column index are added into Toastrel Cache for fast\n> access.\n>\n> Create table, change Toaster, change column type were changed to\n> add new rows in PG_TOASTREL, to use this table and cache instead\n> of altering pg_attribute with new column. For table creation from\n> scratch when no TOAST tables were created is used special condition\n> with version=0.\n>\n> DROP TABLE drops rows in PG_TOASTREL for this table. This allows to -\n> DROP TOASTER command added. 
When no rows with the according Toaster are\n> present in PG_TOASTREL - it is considered unused and thus could be safely\n> dropped from the system.\n>\n> Default toaster 'deftoaster' (reference TOAST mechanics) cannot be dropped.\n>\n> Working branch:\n> https://github.com/postgrespro/postgres/tree/toastapi_with_ctl\n>\n> Would be glad to get any proposals and objections.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\npatching file src/backend/utils/cache/syscache.c\n=== Applying patches on top of PostgreSQL commit ID\n33ab0a2a527e3af5beee3a98fc07201e555d6e45 ===\n=== applying patch ./0001_toaster_interface_v24.patch\npatching file contrib/test_decoding/expected/ddl.out\nHunk #2 FAILED at 874.\n1 out of 2 hunks FAILED -- saving rejects to file\nsrc/backend/utils/cache/syscache.c.rej\n\n[1] - http://cfbot.cputube.org/patch_41_3490.log\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 4 Jan 2023 15:22:27 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi!\n\nThank you for your attention.\nI've rebased the patchset onto the latest master (from 07.01), but the\nsecond part is still\nin pre-patch shape - it is working, but tests are failing due to changes in\nTOAST relations\nlogic - I haven't adapted 'em yet.\n\nThe patchset consists of 4 patches:\nThe first part consists of 2 patches and uses pg_attribute extension, this\nwas an arguable\ndecision and it has serious downside, so I've decided to make another\nrevision (the second part):\n0001_toaster_interface_v25.patch - Pluggable TOAST API, reference TOAST\nleft intact\n0002_toaster_default_v25.patch - Reference TOAST is routed via TOAST API\n\nThe second part - TOAST API revision that does not extend pg_attribute, is\nmore flexible\nand allows a lot more extensibility to the TOAST functionality - instead of\nstoring Toaster\nOID in \"atttoaster\" attribute of pg_attribute - we use new special 
catalog\ntable pg_toastrel,\nwhich keeps all Toaster assignments history (pelase check my message from\n27.12), and\nallows to drop Toasters safely. This part is in pre-patch state, I've send\nit for the review\nand feedback on the general approach:\n0003_pg_toastrel_control_v25.patch - introduces pg_toastrel catalog\nrelation, which stores\nToaster assignment logic;\n0004_drop_toaster_v25.patch - extends SQL syntax with a safe DROP TOASTER\ncommand.\n\n\n\nOn Wed, Jan 4, 2023 at 12:52 PM vignesh C <vignesh21@gmail.com> wrote:\n\n> On Tue, 27 Dec 2022 at 02:32, Nikita Malakhov <hukutoc@gmail.com> wrote:\n> >\n> > Hi hackers!\n> >\n> > Pluggable TOAST API with catalog control table PG_TOASTREL -\n> pre-patch.\n> >\n> > Pluggable TOAST - TOAST API rework - introduce PG_TOASTREL catalog\n> > relation containing TOAST dependencies. NOTE: here is a pre-patch,\n> not\n> > a final version, just to introduce another approach to a Pluggable\n> TOAST\n> > idea, it needs some cleanup, tests rework and some improvements, so\n> > the main\n> > goal of this message is to introduce this different approach. 
This\n> is the\n> > last patch and it is installed on top of older TOAST API patches, so\n> here\n> > are 3 patches attached:\n> >\n> > 0001_toaster_interface_v24.patch.gz\n> > This patch introduces new custom TOAST pointer, Pluggable TOAST API\n> and\n> > Toaster support functions - cache, lookup, and new attribute\n> 'atttoaster'\n> > in PG_ATTRIBUTE table which stores Toaster OID;\n> >\n> > 0002_toaster_default_v24.patch.gz\n> > Here the default TOAST mechanics is routed via TOAST API, but still\n> using\n> > varatt_external TOAST Pointer - so this step does not change overall\n> TOAST\n> > mechanics unless you plug in some custom Toaster;\n> >\n> > 0003_pg_toastrel_table_v24.patch.gz\n> > Here Pluggable TOAST is reworked not to modify PG_ATTRIBUTE, instead\n> this\n> > patch introduces new catalog table PG_TOASTREL with its support\n> functions.\n> >\n> > Motivation: PG_ATTRIBUTE is already the largest catalog table. We try\n> > to avoid modification of existing catalog tables, and previous\n> solution\n> > had several problems:\n> > 1) New field in PG_ATTRIBUTE;\n> > 2) No opportunity to save all Toaster assignment history;\n> > 3) No opportunity to have multi-TOAST tables assigned to a relation\n> or\n> > an attribute;\n> > 4) Toaster cannot be dropped - to drop Toaster we need to scan all\n> tables\n> > with TOASTable columns.\n> >\n> > Instead of extending PG_ATTRIBUTE with ATTTOASTER attribute, we\n> decided\n> > to store all Table-Toaster relations in a new catalog table\n> PG_TOASTREL.\n> > This cancels the necessity to modify catalog table PG_ATTRIBUTE,\n> allows to store\n> > full history of Toasters assignments, and allows to drop unused\n> Toasters\n> > from system.\n> >\n> > Toasters are assigned to a table column. ALTER TABLE ... SET TOASTER\n> command\n> > creates a new row in PG_TOASTREL. To distinguish sequential\n> assignments,\n> > PG_TOASTREL has special attribute - 'version'. 
With each new\n> assignment\n> > its 'version' attribute is increased, and the row with the biggest\n> 'version'\n> > is the current Toaster for a column.\n> >\n> > This approach allows to provide different behavior, even for a\n> single table\n> > we can have one TOAST table for the whole relation (as it is in\n> current TOAST\n> > mechanics), or we can have separate TOAST relation(s) for each\n> TOASTable\n> > column - this requires a slight modification if current approach.\n> The latter\n> > also allows simple invariant of column-oriented storage.\n> >\n> > Also, this approach makes PG_ATTRIBUTE attribute RELTOASTRELID\n> obsolete -\n> > current mechanics allows only 1 TOAST table for relation, which\n> limits\n> > greatly TOAST capabilities - because all TOASTed columns are stored\n> in this\n> > table, which in its turn limits overall base relation capacity.\n> >\n> > In future, this approach allows us to have a kind of near infinite\n> TOAST\n> > storage, with ability to store large values (larger than 512 Mbytes),\n> > auto-creation of TOAST table only when the first value is actually\n> TOASTed,\n> > and much more.\n> >\n> > The approach, along with the TOAST API itself, introduces the\n> catalog table\n> > PG_TOASTREL with a set of support functions.\n> >\n> > PG_TOASTREL definition:\n> >\n> > postgres@postgres=# \\d+ pg_toastrel;\n> > Table\n> \"pg_catalog.pg_toastrel\"\n> > Column | Type | Collation | Nullable | Default | Storage |\n> Toaster | Compression | Stats target | Description\n> >\n> -------------+----------+-----------+----------+---------+---------+---------+-------------+--------------+-------------\n> > oid | oid | | not null | | plain\n> | | | |\n> > toasteroid | oid | | not null | | plain\n> | | | |\n> > relid | oid | | not null | | plain\n> | | | |\n> > toastentid | oid | | not null | | plain\n> | | | |\n> > attnum | smallint | | not null | | plain\n> | | | |\n> > version | smallint | | not null | | plain\n> | | | |\n> > relname | name | 
| not null | | plain\n> | | | |\n> > toastentname | name | | not null | | plain\n> | | | |\n> > flag | \"char\" | | not null | | plain\n> | | | |\n> > toastoptions | \"char\" | | not null | | plain\n> | | | |\n> > Indexes:\n> > \"pg_toastrel_oid_index\" PRIMARY KEY, btree (oid)\n> > \"pg_toastrel_name_index\" UNIQUE CONSTRAINT, btree (toasteroid,\n> relid, version, attnum)\n> > \"pg_toastrel_rel_index\" btree (relid, attnum)\n> > \"pg_toastrel_tsr_index\" btree (toasteroid)\n> > Access method: heap\n> > (This is not a final definition)\n> >\n> > Where:\n> > oid - PG_TOASTREL record ID\n> > toasteroid - Toaster OID from PG_TOASTER\n> > relid - base relation OID\n> > toastentid - TOAST entity OID (not necessary to be a table)\n> > attnum - TOASTable attribute index in base relation\n> > version - Toaster assignment version - sequence of assignments\n> > relname - base relation name (optional)\n> > toastentname - TOAST entity name (optional)\n> > flag - special field to mark rows, currently only the value 'x' is\n> used\n> > to mark unused rows\n> >\n> > PG_TOASTREL unique key consists of:\n> > toasteroid, relid, attnum, version\n> >\n> > All currently assigned Toasters are additionally stored in cache for\n> > fast access. When new row is being TOASTed - Toaster, relation Oid,\n> > TOAST relation Oid, column index are added into Toastrel Cache for\n> fast\n> > access.\n> >\n> > Create table, change Toaster, change column type were changed to\n> > add new rows in PG_TOASTREL, to use this table and cache instead\n> > of altering pg_attribute with new column. For table creation from\n> > scratch when no TOAST tables were created is used special condition\n> > with version=0.\n> >\n> > DROP TABLE drops rows in PG_TOASTREL for this table. This allows to -\n> > DROP TOASTER command added. 
When no rows with the according Toaster\n> are\n> > present in PG_TOASTREL - it is considered unused and thus could be\n> safely\n> > dropped from the system.\n> >\n> > Default toaster 'deftoaster' (reference TOAST mechanics) cannot be\n> dropped.\n> >\n> > Working branch:\n> > https://github.com/postgrespro/postgres/tree/toastapi_with_ctl\n> >\n> > Would be glad to get any proposals and objections.\n>\n> The patch does not apply on top of HEAD as in [1], please post a rebased\n> patch:\n> patching file src/backend/utils/cache/syscache.c\n> === Applying patches on top of PostgreSQL commit ID\n> 33ab0a2a527e3af5beee3a98fc07201e555d6e45 ===\n> === applying patch ./0001_toaster_interface_v24.patch\n> patching file contrib/test_decoding/expected/ddl.out\n> Hunk #2 FAILED at 874.\n> 1 out of 2 hunks FAILED -- saving rejects to file\n> src/backend/utils/cache/syscache.c.rej\n>\n> [1] - http://cfbot.cputube.org/patch_41_3490.log\n>\n> Regards,\n> Vignesh\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Sat, 7 Jan 2023 23:10:06 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "On Sun, 8 Jan 2023 at 01:40, Nikita Malakhov <hukutoc@gmail.com> wrote:\n>\n> Hi!\n>\n> Thank you for your attention.\n> I've rebased the patchset onto the latest master (from 07.01), but the second part is still\n> in pre-patch shape - it is working, but tests are failing due to changes in TOAST relations\n> logic - I haven't adapted 'em yet.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\nc44f6334ca6ff6d242d9eb6742441bc4e1294067 ===\n=== expanding ./0002_toaster_default_v25.patch.gz\n=== expanding ./0001_toaster_interface_v25.patch.gz\n=== expanding ./0004_drop_toaster_v25.patch.gz\n=== expanding ./0003_pg_toastrel_control_v25.patch.gz\n=== applying patch 
./0001_toaster_interface_v25.patch\n....\npatching file src/include/postgres.h\nHunk #1 succeeded at 80 with fuzz 2 (offset 4 lines).\nHunk #2 FAILED at 148.\nHunk #3 FAILED at 315.\nHunk #4 FAILED at 344.\nHunk #5 FAILED at 359.\n4 out of 5 hunks FAILED -- saving rejects to file src/include/postgres.h.rej\n\n[1] - http://cfbot.cputube.org/patch_41_3490.log\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 14 Jan 2023 12:26:37 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi!\nFails due to recent changes. Working on it.\n\nOn Sat, Jan 14, 2023 at 9:56 AM vignesh C <vignesh21@gmail.com> wrote:\n\n> On Sun, 8 Jan 2023 at 01:40, Nikita Malakhov <hukutoc@gmail.com> wrote:\n> >\n> > Hi!\n> >\n> > Thank you for your attention.\n> > I've rebased the patchset onto the latest master (from 07.01), but the\n> second part is still\n> > in pre-patch shape - it is working, but tests are failing due to changes\n> in TOAST relations\n> > logic - I haven't adapted 'em yet.\n>\n> The patch does not apply on top of HEAD as in [1], please post a rebased\n> patch:\n> === Applying patches on top of PostgreSQL commit ID\n> c44f6334ca6ff6d242d9eb6742441bc4e1294067 ===\n> === expanding ./0002_toaster_default_v25.patch.gz\n> === expanding ./0001_toaster_interface_v25.patch.gz\n> === expanding ./0004_drop_toaster_v25.patch.gz\n> === expanding ./0003_pg_toastrel_control_v25.patch.gz\n> === applying patch ./0001_toaster_interface_v25.patch\n> ....\n> patching file src/include/postgres.h\n> Hunk #1 succeeded at 80 with fuzz 2 (offset 4 lines).\n> Hunk #2 FAILED at 148.\n> Hunk #3 FAILED at 315.\n> Hunk #4 FAILED at 344.\n> Hunk #5 FAILED at 359.\n> 4 out of 5 hunks FAILED -- saving rejects to file\n> src/include/postgres.h.rej\n>\n> [1] - http://cfbot.cputube.org/patch_41_3490.log\n>\n> Regards,\n> Vignesh\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nHi!Fails due 
to recent changes. Working on it.", "msg_date": "Sat, 14 Jan 2023 14:47:57 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "On 2023-Jan-14, Nikita Malakhov wrote:\n\n> Hi!\n> Fails due to recent changes.
Working on it.\n\nPlease see my response here\nhttps://postgr.es/m/20230203095540.zutul5vmsbmantbm@alvherre.pgsql\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Oh, great altar of passive entertainment, bestow upon me thy discordant images\nat such speed as to render linear thought impossible\" (Calvin a la TV)\n", "msg_date": "Fri, 3 Feb 2023 10:56:58 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi hackers!\n\nIn response to an opinion in the thread on Compression dictionaries for JSONb [1]:\n>The approaches are completely different,\n>but they seem to be trying to fix the same problem: the fact that the\n>default TOAST stuff isn't good enough for JSONB.\n\nThe problem, actually, is that the default TOAST is often not good for\nmodern loads and amounts of data. Pluggable TOAST is based not only\non pure enthusiasm, but on demands and tickets from production\ndatabases. The main demand is an effective and transparent storage subsystem\nfor large values of some problematic types of data, which we already have,\nwith proven efficiency.\n\nSo we're really quite surprised that it has got so little feedback. We've got\nsome opinions on the approach, but there is no general one on the approach\nitself except doubts about whether the TOAST mechanism needs revision at all.\n\nCurrently we're busy revising the whole Pluggable TOAST API to make it\navailable as an extension, based on hooks, to minimize changes in\nthe core. 
It will be available soon.\n\n> [1] https://postgr.es/m/20230203095540.zutul5vmsbmantbm@alvherre.pgsql\n>\n--\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/
[1]  https://postgr.es/m/20230203095540.zutul5vmsbmantbm@alvherre.pgsql--Nikita MalakhovPostgres Professional https://postgrespro.ru/", "msg_date": "Mon, 6 Feb 2023 00:10:50 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\n\nOn 2023-02-06 00:10:50 +0300, Nikita Malakhov wrote:\n> The problem, actually, is that the default TOAST is often not good for\n> modern loads and amounts of data.Pluggable TOAST is based not only\n> on pure enthusiasm, but on demands and tickets from production\n> databases.\n\n> The main demand is effective and transparent storage subsystem\n> for large values for some problematic types of data, which we already have,\n> with proven efficiency.\n\n> So we're really quite surprised that it has got so little feedback. We've\n> got\n> some opinions on approach but there is no any general one on the approach\n> itself except doubts about the TOAST mechanism needs revision at all.\n\nThe problem for me is that what you've been posting doesn't actually fix\nany problem, but instead introduces lots of new code and complexity.\n\n\n> Currently we're busy revising the whole Pluggable TOAST API to make it\n> available as an extension and based on hooks to minimize changes in\n> the core. It will be available soon.\n\nI don't think we should accept that either. It still doesn't improve\nanything about toast, it just allows you to do such improvements out of\ncore.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 5 Feb 2023 14:33:13 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\n\n> > So we're really quite surprised that it has got so little feedback. 
We've\n> > got\n> > some opinions on approach but there is no any general one on the approach\n> > itself except doubts about the TOAST mechanism needs revision at all.\n>\n> The problem for me is that what you've been posting doesn't actually fix\n> any problem, but instead introduces lots of new code and complexity.\n\n> > Currently we're busy revising the whole Pluggable TOAST API to make it\n> > available as an extension and based on hooks to minimize changes in\n> > the core. It will be available soon.\n>\n> I don't think we should accept that either. It still doesn't improve\n> anything about toast, it just allows you to do such improvements out of\n> core.\n\nAgree. On top of that referencing non-reproducible benchmarks doesn't\nhelp. There were some slides referenced in the thread but I couldn't\nfind exact steps to reproduce the benchmarks.\n\nYour desire to improve the TOAST mechanism is much appreciated. I\nbelieve we are all on the same side here, the one where people work\ntogether to make PostgreSQL an even better DBMS.\n\nHowever in order to achieve this firstly a consensus within the\ncommunity should be reached about how exactly we are going to do this.\nAfterwards, all the code and benchmarks should be made publicly\navailable under a proper license so that anyone could explore and\nreproduce them. Last but not least, the complexity should really be\ntaken into account. There are real people who are going to maintain\nthe code after (and if) it will be merged, and there are not so many\nof them.\n\nThe problems I see are that the type-aware TOASTers skipped step (1)\nright to the step (2) and doesn't seem to consider (3). 
Even after it\nwas explicitly pointed out that we should take a step back and return\nto (1).\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 6 Feb 2023 13:17:30 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "On 2023-Feb-06, Nikita Malakhov wrote:\n\n> Currently we're busy revising the whole Pluggable TOAST API to make it\n> available as an extension and based on hooks to minimize changes in\n> the core. It will be available soon.\n\nHmm, I'm not sure why would PGDG want to accept such a thing. I read\n\"minimize changes\" as \"open source Postgres can keep their crap\nimplementation and companies will offer good ones for a fee\". I'd\nrather have something that can give users direct benefit -- not hide it\nbehind proprietary licenses forcing each company to implement their own\nperformant toaster.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 6 Feb 2023 11:49:17 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi!\n\nExisting TOAST has several very painful drawbacks - lack of UPDATE\noperation, bloating of TOAST tables, and limits that are implicitly implied\non base tables by their TOAST tables, so it is seems not fair to say that\nPluggable TOAST does not solve any problems but just introduces new\nones.\n\nThe main reason behind this decision is that keeping the first\nimplementation\non the side of the vanilla (I mean rebasing it) over time is very difficult\ndue\nto the very invasive nature of this solution.\n\nSo we decided to reduce changes in the core to the minimum necessary\nto make it available through the hooks, because the hooks part is very\nlightweight and simple to keep rebasing onto the vanilla core. 
We plan\nto keep this extension free with the PostgreSQL license, so any PostgreSQL\nuser could benefit from the TOAST on steroids, and sometimes in the\nfuture it will be a much simpler task to integrate the Pluggable TOAST into\nthe vanilla, along with our advanced TOAST implementations which\nwe plan to keep under Open Source licenses too.\n\n\nOn Mon, Feb 6, 2023 at 1:49 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2023-Feb-06, Nikita Malakhov wrote:\n>\n> > Currently we're busy revising the whole Pluggable TOAST API to make it\n> > available as an extension and based on hooks to minimize changes in\n> > the core. It will be available soon.\n>\n> Hmm, I'm not sure why would PGDG want to accept such a thing. I read\n> \"minimize changes\" as \"open source Postgres can keep their crap\n> implementation and companies will offer good ones for a fee\". I'd\n> rather have something that can give users direct benefit -- not hide it\n> behind proprietary licenses forcing each company to implement their own\n> performant toaster.\n>\n> --\n> Álvaro Herrera 48°01'N 7°57'E —\n> https://www.EnterpriseDB.com/\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\n", "msg_date": "Mon, 6 Feb 2023 16:38:01 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\n\n> The main reason behind this decision is that keeping the first implementation\n> on the side of the vanilla (I mean rebasing it) over time is very difficult due\n> to the very invasive nature of this solution.\n>\n> So we decided to reduce changes in the core to the minimum necessary\n> to make it available through the hooks, because the hooks part is very\n> lightweight and simple to keep rebasing onto the vanilla core. 
We plan\n> to keep this extension free with the PostgreSQL license, so any PostgreSQL\n> user could benefit from the TOAST on steroids, and sometimes in the\n> future it will be a much simpler task to integrate the Pluggable TOAST into\n> the vanilla, along with our advanced TOAST implementations which\n> we plan to keep under Open Source licenses too.\n\nThat's great to hear. I'm looking forward to the link to the\ncorresponding GitHub repository. Please let us know when this effort\nwill be available for testing and benchmarking!\n\nI would like to point out however that there were several other pieces\nof feedback that could have been missed:\n\n* No one wants to see this as an extension. This was my original\nproposal (adding ZSON to /contrib/) and it was rejected. The community\nexplicitly wants this to be a core feature with its syntax,\nautocompletion, documentation and stuff.\n* The community wants the feature to have a simple implementation. You\nsaid yourself that the idea of type-aware TOASTers is very invasive,\nand I completely agree.\n* People also want this to be simple from the user perspective, as\nsimple as just CREATE COMPRESSED TABLE ... 
[USING lz4|zstd];\n\nAt least this is my personal summary/impression from following the mailing list.\n\nAnyhow since we are back to the stage where we discuss the RFC I\nsuggest continuing it in the compression dictionaries thread [1] since\nwe made noticeable progress there already.\n\n[1]: https://postgr.es/m/CAJ7c6TOtAB0z1UrksvGTStNE-herK-43bj22%3D5xVBg7S4vr5rQ%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 6 Feb 2023 17:38:34 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\n\nOn 2023-02-06 16:38:01 +0300, Nikita Malakhov wrote:\n> So we decided to reduce changes in the core to the minimum necessary\n> to make it available through the hooks, because the hooks part is very\n> lightweight and simple to keep rebasing onto the vanilla core.\n\nAt least I don't think we should accept such hooks. I don't think I am alone\nin that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 6 Feb 2023 11:24:01 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "On Mon, 6 Feb 2023 at 20:24, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-02-06 16:38:01 +0300, Nikita Malakhov wrote:\n> > So we decided to reduce changes in the core to the minimum necessary\n> > to make it available through the hooks, because the hooks part is very\n> > lightweight and simple to keep rebasing onto the vanilla core.\n>\n> At least I don't think we should accept such hooks. 
I don't think I am alone\n> in that.\n\n+1\n\nAssuming type-aware TOASTing is the goal, I don't think hooks are the\nright design to implement that.\n\n-Matthias van de Meent\n\n\n", "msg_date": "Mon, 6 Feb 2023 21:11:45 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "On Mon, 6 Feb 2023 at 15:38, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> I would like to point out however that there were several other pieces\n> of feedback that could have been missed:\n>\n> * No one wants to see this as an extension. This was my original\n> proposal (adding ZSON to /contrib/) and it was rejected. The community\n> explicitly wants this to be a core feature with its syntax,\n> autocompletion, documentation and stuff.\n\nI believe that is a misrepresentation of the situation. ZSON had\n(has?) several systemic issues and could not be accepted to /contrib/\nin the way it was implemented, and it was commented that it would make\nsense that the feature of compression assisted by dictionaries would\nbe implemented in core. Yet still, that feature is only closely\nrelated to pluggable TOAST, but it is not the same as making TOAST\npluggable.\n\n> * The community wants the feature to have a simple implementation. You\n> said yourself that the idea of type-aware TOASTers is very invasive,\n> and I completely agree.\n\nI'm not sure that this is correct either. 
Compression (and TOAST) is\ninherently complex, and I don't think we should reject improvements\nbecause they are complex.\nThe problem that I see being raised here, is that there was little\ndiscussion and no observed community consensus about the design of\nthis complex feature *before* this patch with high complexity was\nprovided.\nThe next action that was requested is to take a step back and decide\nhow we would want to implement type-aware TOASTing (and the associated\npatch compression dictionaries) *before* we look into the type-aware\ntoasting.\n\n> * People also want this to be simple from the user perspective, as\n> simple as just CREATE COMPRESSED TABLE ... [USING lz4|zstd];\n\nCould you provide a reference to this? I have yet to see a COMPRESSED\nTABLE feature or syntax, let alone users asking for TOAST to be as\neasily usable as that feature or syntax.\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Mon, 6 Feb 2023 21:14:54 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\n\nIt'll be great to see more opinions on the approach as a whole.\n\n>The problem that I see being raised here, is that there was little\n>discussion and no observed community consensus about the design of\n>this complex feature *before* this patch with high complexity was\n>provided.\n>The next action that was requested is to take a step back and decide\n>how we would want to implement type-aware TOASTing (and the associated\n>patch compression dictionaries) *before* we look into the type-aware\n>toasting.\n\nWe decided to put this improvement as a patch because we thought\nthat the most complex and questionable part would be the TOAST\nimplementations (the Toasters) itself, and the Pluggable TOAST is\njust a tool to make plugging different TOAST implementations clean\nand simple.\n\n--\nNikita Malakhov\nPostgres 
Professional\nhttps://postgrespro.ru/\n\n", "msg_date": "Mon, 6 Feb 2023 23:21:45 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\n\n> I believe that is a misrepresentation of the situation. ZSON had\n> (has?) several systemic issues and could not be accepted to /contrib/\n> in the way it was implemented, and it was commented that it would make\n> sense that the feature of compression assisted by dictionaries would\n> be implemented in core. Yet still, that feature is only closely\n> related to pluggable TOAST, but it is not the same as making TOAST\n> pluggable.\n>\n> > * The community wants the feature to have a simple implementation. You\n> > said yourself that the idea of type-aware TOASTers is very invasive,\n> > and I completely agree.\n>\n> I'm not sure that this is correct either. 
Compression (and TOAST) is\n> inherently complex, and I don't think we should reject improvements\n> because they are complex.\n> The problem that I see being raised here, is that there was little\n> discussion and no observed community consensus about the design of\n> this complex feature *before* this patch with high complexity was\n> provided.\n\nStrictly speaking there is no such thing as \"community opinion\". There\nare different people, everyone has their own opinion. To make things\nmore interesting the opinions change with time.\n\nI did my best to make a brief summary of 100+ messages from different\npeople in something like 4 threads. These are things that were\nrequested and/or no one disagrees with (at least no one said \"no, put\nall this out of the core! and make it complicated too!\"). Focusing on\nsomething (almost) no one disagrees with seems to be more productive\nthan arguing about something everyone disagrees with.\n\nAs I see it, the goal is not to be right, but rather to find a\nconsensus most of us will be not unhappy with.\n\n> The next action that was requested is to take a step back and decide\n> how we would want to implement type-aware TOASTing (and the associated\n> patch compression dictionaries) *before* we look into the type-aware\n> toasting.\n\nYes, I thought we already agreed to forget about type-aware TOASTing\nand compression dictionaries, and are looking for a consensus now.\n\nTo clarify, I don't think that pluggable TOASTing is an absolutely bad\nidea. We are just not discussing this particular idea anymore, at\nleast for now.\n\n> > * People also want this to be simple from the user perspective, as\n> > simple as just CREATE COMPRESSED TABLE ... [USING lz4|zstd];\n>\n> Could you provide a reference to this? I have yet to see a COMPRESSED\n> TABLE feature or syntax, let alone users asking for TOAST to be as\n> easily usable as that feature or syntax.\n\nI was referring to the recent discussion of the new RFC. 
Please see\n[1] and below.\n\n[1]: https://www.postgresql.org/message-id/flat/20230203095540.zutul5vmsbmantbm%40alvherre.pgsql#7cce6acef0cb7eb2490715ec9d835e74\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 7 Feb 2023 13:05:45 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi, hackers!\n\nMaybe I've read the thread too superficially, but for me, it seems\nlike more of a discussion on what TOAST should NOT be. Maybe someone\nmore in the topic could explain what is the consensus on what we\nrequire and what we like to to have in a new TOAST?\n\nFor me, a good toast should be chunk-updatable, so that we don't need\nto rewrite the whole TOAST and WAL-replicate the whole thing at every\nsmall attribute modification. But obviously, it's just another\nopinion.\n\nKind regards,\nPavel Borisov\n\n\n", "msg_date": "Tue, 7 Feb 2023 14:38:20 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi!\n\nI have a question for the community - why was this patch silently put to a\n\"rejected\" status?\nShould there no be some kind of explanation?\n\nDuring this discussion I got the impression that for some reason some\nmembers of the community\ndo not want the TOAST functionality, which has such drawbacks that make it\nreally a curse for\nin many ways very good DBMS, to be touched. We cannot get rid of it because\nof backwards\ncompatibility, so the best way is to make it more adaptable and extensible\n- this is what this thread\nis about. 
We proposed our vision on how to extend the TOAST Postgres-way,\nlike Pluggable\nStorage some time before.\n\nThere are some very complex subjects left in this topic that really need a\ncommunity' attention.\nI've mentioned them above, but there was no feedback on them.\n\nPavel, we've already had an update implementation for TOAST. But it is a\npart of a Pluggable\nTOAST and it hardly would be here without it. I've started another thread\non extending the TOAST\npointer, maybe you would want to participate there [1].\n\nWe still would be grateful for feedback.\n\n[1] Extending the TOAST Pointer\n<https://www.postgresql.org/message-id/flat/CAJ7c6TNAYyeMYKVkiwOZChy7UpE_CkjpYOk73gcWTXMkLkEyzw%40mail.gmail.com#59aacdde27dd61277fe7c46c61c84b2c>\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\n", "msg_date": "Wed, 14 Jun 2023 13:05:24 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "On Wed, 14 Jun 2023 at 14:05, Nikita Malakhov <hukutoc@gmail.com> wrote:\n>\n> Hi!\n>\n> I have a question for the community - why was this patch silently put to a \"rejected\" status?\n> Should there no be some kind of explanation?\n>\n> During this discussion I got the impression that for some reason some members of the community\n> do not want the TOAST functionality, which has such drawbacks that make it really a curse for\n> in many ways very good DBMS, to be touched. We cannot get rid of it because of backwards\n> compatibility, so the best way is to make it more adaptable and extensible - this is what this thread\n> is about. We proposed our vision on how to extend the TOAST Postgres-way, like Pluggable\n> Storage some time before.\n>\n> There are some very complex subjects left in this topic that really need a community' attention.\n> I've mentioned them above, but there was no feedback on them.\n>\n> Pavel, we've already had an update implementation for TOAST. But it is a part of a Pluggable\n> TOAST and it hardly would be here without it. I've started another thread on extending the TOAST\n> pointer, maybe you would want to participate there [1].\n>\n> We still would be grateful for feedback.\n>\n> [1] Extending the TOAST Pointer\nI don't see a clear reason it's rejected, besides technically it's\nWaiting on Author since January. 
If it's a mistake and the patch is\nup-to-date you can set an appropriate status.\n\nRegards,\nPavel Borisov,\nSupabase.\n\n\n", "msg_date": "Wed, 14 Jun 2023 14:21:05 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\n\n> > I have a question for the community - why was this patch silently put to a \"rejected\" status?\n> > Should there no be some kind of explanation?\n\nI wouldn't say that it happened \"silently\" nor that the reason is so\nmysterious [1].\n\n[1]: https://www.postgresql.org/message-id/flat/CAM-w4HPjg7NwEWBtXn1empgAg3fqJHifHo_nhgqFWopiYaNxYg%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 14 Jun 2023 13:56:34 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi,\n\nSeems that I missed the thread mentioned above. I strongly disagree\nwith such statement, Pluggable TOAST could not be a part or Compression\nDictionaries thread because the TOAST improvement is a more general subject,\nit involves much deeper and tricky changes in the core. And also is much\nmore\npromising in terms of performance and storage improvements.\n\nWe already have a lot of changes in Pluggable TOAST that were not committed\nto the main GIT branch of this thread, so it seems that I have to merge\nthem and\nreopen it.\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\n", "msg_date": "Fri, 16 Jun 2023 13:18:49 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" }, { "msg_contents": "Hi Nikita,\n\n> We already have a lot of changes in Pluggable TOAST that were not committed\n> to the main GIT branch of this thread, so it seems that I have to merge them and\n> reopen it.\n\nPretty sure that reopening an already rejected patch that is competing\nwith compression dictionaries (which the rest of us are currently\nfocusing on) will achieve anything. Consider joining the compression\ndictionaries effort instead [1]. During the discussion with the\ncommunity it ended up being a TOAST improvement after all. 
So we could\n> use your expertise in this area.\n>\n> [1]: https://commitfest.postgresql.org/43/3626/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 16 Jun 2023 17:44:37 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Pluggable toaster" } ]
[ { "msg_contents": "Hi,\n\nThere's a wait for replay that is open coded (instead of using the\nwait_for_catchup() routine), and sometimes the second of two such\nwaits at line 51 (in master) times out after 3 minutes with \"standby\nnever caught up\". It's happening on three particular Windows boxes,\nbut once also happened on the AIX box \"tern\".\n\n branch | animal | count\n---------------+-----------+-------\n HEAD | drongo | 1\n HEAD | fairywren | 8\n REL_10_STABLE | drongo | 3\n REL_10_STABLE | fairywren | 10\n REL_10_STABLE | jacana | 3\n REL_11_STABLE | drongo | 1\n REL_11_STABLE | fairywren | 4\n REL_11_STABLE | jacana | 3\n REL_12_STABLE | drongo | 2\n REL_12_STABLE | fairywren | 5\n REL_12_STABLE | jacana | 1\n REL_12_STABLE | tern | 1\n REL_13_STABLE | fairywren | 3\n REL_14_STABLE | drongo | 2\n REL_14_STABLE | fairywren | 6\n\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2021-12-30%2014:42:30\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2021-12-30%2013:13:22\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-12-30%2006:03:07\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-12-22%2011:37:37\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-12-22%2010:46:07\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-12-22%2009:03:06\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2021-12-17%2004:59:17\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2021-12-17%2003:59:51\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2021-12-16%2004:37:58\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-12-15%2009:57:14\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2021-12-15%2002:38:43\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-12-14%2020:42:15\n 
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-12-14%2012:08:41\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-12-14%2000:35:32\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-12-13%2023:40:11\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-12-13%2022:47:25\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-12-09%2006:59:10\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-12-09%2006:04:04\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-12-09%2001:36:09\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-12-08%2019:20:35\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-12-08%2018:04:28\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2021-12-08%2014:12:32\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-12-08%2011:15:58\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2021-12-08%2004:04:22\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2021-12-03%2017:31:49\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-11-11%2015:58:55\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-10-02%2022:00:17\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-09-09%2005:16:43\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-08-24%2004:45:09\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2021-07-17%2010:57:49\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2021-06-12%2016:05:32\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-02-07%2012:59:43\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2020-03-24%2012:49:50\n 
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2020-02-01%2018:00:27\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2020-02-01%2017:26:27\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2020-01-30%2023:49:49\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2019-12-22%2014:19:02\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2019-12-13%2000:12:11\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2019-12-09%2006:02:05\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2019-12-06%2003:07:42\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2019-11-02%2014:41:04\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2019-10-25%2013:12:08\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2019-10-24%2013:12:41\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2019-10-23%2023:10:00\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2019-10-23%2018:00:39\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2019-10-22%2015:05:57\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2019-10-18%2013:29:49\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2019-10-16%2014:54:46\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2019-10-15%2014:21:11\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2019-10-14%2013:15:07\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2019-10-13%2014:19:41\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2019-10-12%2016:32:06\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2019-10-10%2013:12:09\n\n\n", "msg_date": "Fri, 31 Dec 2021 09:01:51 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Why is 
src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "\nOn 12/30/21 15:01, Thomas Munro wrote:\n> Hi,\n>\n> There's a wait for replay that is open coded (instead of using the\n> wait_for_catchup() routine), and sometimes the second of two such\n> waits at line 51 (in master) times out after 3 minutes with \"standby\n> never caught up\". It's happening on three particular Windows boxes,\n> but once also happened on the AIX box \"tern\".\n>\n> branch | animal | count\n> ---------------+-----------+-------\n> HEAD | drongo | 1\n> HEAD | fairywren | 8\n> REL_10_STABLE | drongo | 3\n> REL_10_STABLE | fairywren | 10\n> REL_10_STABLE | jacana | 3\n> REL_11_STABLE | drongo | 1\n> REL_11_STABLE | fairywren | 4\n> REL_11_STABLE | jacana | 3\n> REL_12_STABLE | drongo | 2\n> REL_12_STABLE | fairywren | 5\n> REL_12_STABLE | jacana | 1\n> REL_12_STABLE | tern | 1\n> REL_13_STABLE | fairywren | 3\n> REL_14_STABLE | drongo | 2\n> REL_14_STABLE | fairywren | 6\n>\n> \n\n\nFYI, drongo and fairywren are run on the same AWS/EC2 Windows Server\n2019 instance. Nothing else runs on it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 30 Dec 2021 15:35:45 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 12/30/21 15:01, Thomas Munro wrote:\n>> There's a wait for replay that is open coded (instead of using the\n>> wait_for_catchup() routine), and sometimes the second of two such\n>> waits at line 51 (in master) times out after 3 minutes with \"standby\n>> never caught up\". It's happening on three particular Windows boxes,\n>> but once also happened on the AIX box \"tern\".\n\n> FYI, drongo and fairywren are run on the same AWS/EC2 Windows Server\n> 2019 instance. 
Nothing else runs on it.\n\nI spent a little time looking into this just now. There are similar\nfailures in both 002_standby.pl and 003_standby_2.pl, which is\nunsurprising because there are essentially-identical test sequences\nin both. What I've realized is that the issue is triggered by\nthis sequence:\n\n$standby->start;\n...\n$primary->restart;\n$primary->safe_psql('postgres', 'checkpoint');\nmy $primary_lsn =\n $primary->safe_psql('postgres', 'select pg_current_wal_lsn()');\n$standby->poll_query_until('postgres',\n\tqq{SELECT '$primary_lsn'::pg_lsn <= pg_last_wal_replay_lsn()})\n or die \"standby never caught up\";\n\n(the failing poll_query_until is at line 51 in 002_standby.pl, or\nline 37 in 003_standby_2.pl). That is, we have forced a primary\nrestart since the standby first connected to the primary, and\nnow we have to wait for the standby to reconnect and catch up.\n\n*These two tests seem to be the only TAP tests that do that*.\nSo I think there's not really anything specific to commit_ts testing\ninvolved, it's just a dearth of primary restarts elsewhere.\n\nLooking at the logs in the failing cases, there's no evidence\nthat the standby has even detected the primary's disconnection,\nwhich explains why it hasn't attempted to reconnect. For\nexample, in the most recent HEAD failure,\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2022-01-03%2018%3A04%3A41\n\nthe standby reports successful connection:\n\n2022-01-03 18:58:04.920 UTC [179700:1] LOG: started streaming WAL from primary at 0/3000000 on timeline 1\n\n(which we can also see in the primary's log), but after that\nthere's no log traffic at all except the test script's vain\nchecks of pg_last_wal_replay_lsn(). 
In the same animal's\nimmediately preceding successful run,\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=fairywren&dt=2022-01-03%2015%3A04%3A41&stg=module-commit_ts-check\n\nwe see:\n\n2022-01-03 15:59:24.186 UTC [176664:1] LOG: started streaming WAL from primary at 0/3000000 on timeline 1\n2022-01-03 15:59:25.003 UTC [176664:2] LOG: replication terminated by primary server\n2022-01-03 15:59:25.003 UTC [176664:3] DETAIL: End of WAL reached on timeline 1 at 0/3030CB8.\n2022-01-03 15:59:25.003 UTC [176664:4] FATAL: could not send end-of-streaming message to primary: server closed the connection unexpectedly\n\t\tThis probably means the server terminated abnormally\n\t\tbefore or while processing the request.\n\tno COPY in progress\n2022-01-03 15:59:25.005 UTC [177092:5] LOG: invalid record length at 0/3030CB8: wanted 24, got 0\n...\n2022-01-03 15:59:25.564 UTC [177580:1] LOG: started streaming WAL from primary at 0/3000000 on timeline 1\n\nSo for some reason, on these machines detection of walsender-initiated\nconnection close is unreliable ... or maybe, the walsender didn't close\nthe connection, but is somehow still hanging around? Don't have much idea\nwhere to dig beyond that, but maybe someone else will. I wonder in\nparticular if this could be related to our recent discussions about\nwhether to use shutdown(2) on Windows --- could we need to do the\nequivalent of 6051857fc/ed52c3707 on walsender connections?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 08 Jan 2022 18:41:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "I wrote:\n> So for some reason, on these machines detection of walsender-initiated\n> connection close is unreliable ... or maybe, the walsender didn't close\n> the connection, but is somehow still hanging around? Don't have much idea\n> where to dig beyond that, but maybe someone else will. 
I wonder in\n> particular if this could be related to our recent discussions about\n> whether to use shutdown(2) on Windows --- could we need to do the\n> equivalent of 6051857fc/ed52c3707 on walsender connections?\n\n... wait a minute. After some more study of the buildfarm logs,\nit was brought home to me that these failures started happening\njust after 6051857fc went in:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_failures.pl?max_days=90&branch=&member=&stage=module-commit_tsCheck&filter=Submit\n\nThe oldest matching failure is jacana's on 2021-12-03.\n(The above sweep finds an unrelated-looking failure on 2021-11-11,\nbut no others before 6051857fc went in on 2021-12-02. Also, it\nlooks likely that ed52c3707 on 2021-12-07 made the failure more\nprobable, because jacana's is the only matching failure before 12-07.)\n\nSo I'm now thinking it's highly likely that those commits are\ncausing it somehow, but how?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 08 Jan 2022 20:17:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "Hello Tom,\n09.01.2022 04:17, Tom Lane wrote:\n>> So for some reason, on these machines detection of walsender-initiated\n>> connection close is unreliable ... or maybe, the walsender didn't close\n>> the connection, but is somehow still hanging around? Don't have much idea\n>> where to dig beyond that, but maybe someone else will. I wonder in\n>> particular if this could be related to our recent discussions about\n>> whether to use shutdown(2) on Windows --- could we need to do the\n>> equivalent of 6051857fc/ed52c3707 on walsender connections?\n> ... wait a minute. 
After some more study of the buildfarm logs,\n> it was brought home to me that these failures started happening\n> just after 6051857fc went in:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_failures.pl?max_days=90&branch=&member=&stage=module-commit_tsCheck&filter=Submit\n>\n> The oldest matching failure is jacana's on 2021-12-03.\n> (The above sweep finds an unrelated-looking failure on 2021-11-11,\n> but no others before 6051857fc went in on 2021-12-02. Also, it\n> looks likely that ed52c3707 on 2021-12-07 made the failure more\n> probable, because jacana's is the only matching failure before 12-07.)\n>\n> So I'm now thinking it's highly likely that those commits are\n> causing it somehow, but how?\n>\nI've managed to reproduce this failure too.\nRemoving \"shutdown(MyProcPort->sock, SD_SEND);\" doesn't help here, so\nthe culprit is exactly \"closesocket(MyProcPort->sock);\".\nI've added `system(\"netstat -ano\");` before die() in 002_standby.pl and see:\n# Postmaster PID for node \"primary\" is 944\n  Proto  Local Address          Foreign Address        State           PID\n...\n  TCP    127.0.0.1:58545        127.0.0.1:61995        FIN_WAIT_2      944\n...\n  TCP    127.0.0.1:61995        127.0.0.1:58545        CLOSE_WAIT      1352\n\n(Replacing SD_SEND with SD_BOTH doesn't change the behaviour.)\n\nLooking at the libpqwalreceiver.c:\n        /* Now that we've consumed some input, try again */\n        rawlen = PQgetCopyData(conn->streamConn, &conn->recvBuf, 1);\nhere we get -1 on the primary disconnection.\nThen we get COMMAND_OK here:\n        res = libpqrcv_PQgetResult(conn->streamConn);\n        if (PQresultStatus(res) == PGRES_COMMAND_OK)\nand finally just hang at:\n            /* Verify that there are no more results. */\n            res = libpqrcv_PQgetResult(conn->streamConn);\nuntil the standby gets interrupted by the TAP test. 
(That call can also\nreturn NULL and then the test completes successfully.)\nGoing down through the call chain, I see that at the end of it\nWaitForMultipleObjects() hangs while waiting for the primary connection\nsocket event. So it looks like the socket, that is closed by the\nprimary, can get into a state unsuitable for WaitForMultipleObjects().\nI tried to check the socket state with the WSAPoll() function and\ndiscovered that it returns POLLHUP for the \"problematic\" socket.\nThe following draft addition in latch.c:\nint\nWaitLatchOrSocket(Latch *latch, int wakeEvents, pgsocket sock,\n                  long timeout, uint32 wait_event_info)\n{\n    int            ret = 0;\n    int            rc;\n    WaitEvent    event;\n\n#ifdef WIN32\n    if (wakeEvents & WL_SOCKET_MASK) {\n        WSAPOLLFD pollfd;\n        pollfd.fd = sock;\n        pollfd.events = POLLRDNORM | POLLWRNORM;\n        pollfd.revents = 0;\n        int rc = WSAPoll(&pollfd, 1, 0);\n        if ((rc == 1) && (pollfd.revents & POLLHUP)) {\n            elog(WARNING, \"WaitLatchOrSocket: A stream-oriented\nconnection was either disconnected or aborted.\");\n            return WL_SOCKET_MASK;\n        }\n    }\n#endif\n\nmakes the test 002_standby.pl pass (100 of 100 iterations, while without\nthe fix I get failures roughly on every third run). I'm not sure where to\nplace this check, maybe it's better to move it up to\nlibpqrcv_PQgetResult() to minimize its footprint or to find a less\nWindows-specific approach, but I'd prefer a client-side fix anyway, as\ngraceful closing of a socket by a server seems a legitimate action.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sun, 9 Jan 2022 14:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> 09.01.2022 04:17, Tom Lane wrote:\n>> ... wait a minute. 
After some more study of the buildfarm logs,\n>> it was brought home to me that these failures started happening\n>> just after 6051857fc went in:\n\n> I've managed to reproduce this failure too.\n> Removing \"shutdown(MyProcPort->sock, SD_SEND);\" doesn't help here, so\n> the culprit is exactly \"closesocket(MyProcPort->sock);\".\n\nUgh. Did you try removing the closesocket and keeping shutdown?\nI don't recall if we tried that combination before.\n\n> ... I'm not sure where to\n> place this check, maybe it's better to move it up to\n> libpqrcv_PQgetResult() to minimize it's footprint or to find less\n> Windows-specific approach, but I'd prefer a client-side fix anyway, as\n> graceful closing a socket by a server seems a legitimate action.\n\nWhat concerns me here is whether this implies that other clients\n(libpq, jdbc, etc) are going to need changes as well. Maybe\nlibpq is okay, because we've not seen failures of the isolation\ntests that use pg_cancel_backend(), but still it's worrisome.\nI'm not entirely sure whether the isolationtester would notice\nthat a connection that should have died didn't.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 09 Jan 2022 11:49:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "On Mon, Jan 10, 2022 at 12:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n> Going down through the call chain, I see that at the end of it\n> WaitForMultipleObjects() hangs while waiting for the primary connection\n> socket event. 
So it looks like the socket, that is closed by the\n> primary, can get into a state unsuitable for WaitForMultipleObjects().\n\nI wonder if FD_CLOSE is edge-triggered, and it's already told us once.\nI think that's what these Python Twisted guys are saying:\n\nhttps://stackoverflow.com/questions/7598936/how-can-a-disconnected-tcp-socket-be-reliably-detected-using-msgwaitformultipleo\n\n> I tried to check the socket state with the WSAPoll() function and\n> discovered that it returns POLLHUP for the \"problematic\" socket.\n\nGood discovery. I guess if the above theory is right, there's a\nmemory somewhere that makes this level-triggered as expected by users\nof poll().\n\n\n", "msg_date": "Mon, 10 Jan 2022 08:06:19 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "On Mon, Jan 10, 2022 at 8:06 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Mon, Jan 10, 2022 at 12:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n> > Going down through the call chain, I see that at the end of it\n> > WaitForMultipleObjects() hangs while waiting for the primary connection\n> > socket event. So it looks like the socket, that is closed by the\n> > primary, can get into a state unsuitable for WaitForMultipleObjects().\n>\n> I wonder if FD_CLOSE is edge-triggered, and it's already told us once.\n\nCan you reproduce it with this patch?", "msg_date": "Mon, 10 Jan 2022 15:00:19 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" 
}, { "msg_contents": "10.01.2022 05:00, Thomas Munro wrote:\n> On Mon, Jan 10, 2022 at 8:06 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> On Mon, Jan 10, 2022 at 12:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n>>> Going down through the call chain, I see that at the end of it\n>>> WaitForMultipleObjects() hangs while waiting for the primary connection\n>>> socket event. So it looks like the socket, that is closed by the\n>>> primary, can get into a state unsuitable for WaitForMultipleObjects().\n>> I wonder if FD_CLOSE is edge-triggered, and it's already told us once.\n> Can you reproduce it with this patch?\nUnfortunately, this fix (with the correction \"(cur_event &\nWL_SOCKET_MASK)\" -> \"(cur_event->events & WL_SOCKET_MASK)\") doesn't work,\nbecause we have two separate calls to libpqrcv_PQgetResult():\n> Then we get COMMAND_OK here:\n>         res = libpqrcv_PQgetResult(conn->streamConn);\n>         if (PQresultStatus(res) == PGRES_COMMAND_OK)\n> and finally just hang at:\n>             /* Verify that there are no more results. */\n>             res = libpqrcv_PQgetResult(conn->streamConn);\nThe libpqrcv_PQgetResult function, in turn, invokes WaitLatchOrSocket()\nwhere WaitEvents are defined locally, and the closed flag is set on the\nfirst invocation but is expected to be checked on the second.\n>> I've managed to reproduce this failure too.\n>> Removing \"shutdown(MyProcPort->sock, SD_SEND);\" doesn't help here, so\n>> the culprit is exactly \"closesocket(MyProcPort->sock);\".\n>>\n> Ugh. Did you try removing the closesocket and keeping shutdown?\n> I don't recall if we tried that combination before.\nEven with shutdown() only I still observe WaitForMultipleObjects()\nhanging (and WSAPoll() returns POLLHUP for the socket).\n\nAs to your concern regarding other clients, I suspect that this issue is\ncaused by libpqwalreceiver's specific call pattern and maybe other\nclients just don't do that. 
I need some more time to analyze this.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 10 Jan 2022 10:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "On Mon, Jan 10, 2022 at 8:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> The libpqrcv_PQgetResult function, in turn, invokes WaitLatchOrSocket()\n> where WaitEvents are defined locally, and the closed flag set on the\n> first invocation but expected to be checked on second.\n\nD'oh, right. There's also a WaitLatchOrSocket call in walreceiver.c.\nWe'd need a long-lived WaitEventSet common across all of these sites,\nwhich is hard here (because the socket might change under you, as\ndiscussed in other threads that introduced long lived WaitEventSets to\nother places but not here).\n\n/me wonders if it's possible that graceful FD_CLOSE is reported only\nonce, but abortive/error FD_CLOSE is reported multiple times...\n\n\n", "msg_date": "Mon, 10 Jan 2022 22:20:14 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "On Mon, Jan 10, 2022 at 10:20 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Mon, Jan 10, 2022 at 8:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> > The libpqrcv_PQgetResult function, in turn, invokes WaitLatchOrSocket()\n> > where WaitEvents are defined locally, and the closed flag set on the\n> > first invocation but expected to be checked on second.\n>\n> D'oh, right. 
There's also a WaitLatchOrSocket call in walreceiver.c.\n> We'd need a long-lived WaitEventSet common across all of these sites,\n> which is hard here (because the socket might change under you, as\n> discussed in other threads that introduced long lived WaitEventSets to\n> other places but not here).\n\nThis is super quick-and-dirty code (and doesn't handle some errors or\nsocket changes correctly), but does it detect the closed socket?", "msg_date": "Mon, 10 Jan 2022 22:40:27 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "10.01.2022 12:40, Thomas Munro wrote:\n> This is super quick-and-dirty code (and doesn't handle some errors or\n> socket changes correctly), but does it detect the closed socket?\nYes, it fixes the behaviour and makes the 002_standby test pass (100 of\n100 iterations). I'm yet to find out whether the other\nWaitLatchOrSocket' users (e. g. postgres_fdw) can suffer from the\ndisconnected socket state, but this approach definitely works for\nwalreceiver.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 10 Jan 2022 20:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "On Tue, Jan 11, 2022 at 6:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n> 10.01.2022 12:40, Thomas Munro wrote:\n> > This is super quick-and-dirty code (and doesn't handle some errors or\n> > socket changes correctly), but does it detect the closed socket?\n\n> Yes, it fixes the behaviour and makes the 002_standby test pass (100 of\n> 100 iterations).\n\nThanks for testing. 
That result does seem to confirm the hypothesis\nthat FD_CLOSE is reported only once for the socket on graceful\nshutdown (that is, it's edge-triggered and incidentally you won't get\nFD_READ), so you need to keep track of it carefully. Incidentally,\nanother observation is that your WSAPoll() test appears to be\nreturning POLLHUP where at least Linux, FreeBSD and Solaris would not:\na socket that is only half shut down (the primary shut down its end\ngracefully, but walreceiver did not), so I suspect Windows' POLLHUP\nmight have POLLRDHUP semantics.\n\n> I'm yet to find out whether the other\n> WaitLatchOrSocket' users (e. g. postgres_fdw) can suffer from the\n> disconnected socket state, but this approach definitely works for\n> walreceiver.\n\nI see where you're going: there might be safe call sequences and\nunsafe call sequences, and maybe walreceiver is asking for trouble by\ndouble-polling. I'm not sure about that; I got the impression\nrecently that it's possible to get FD_CLOSE while you still have\nbuffered data to read, so then the next recv() will return > 0 and\nthen we don't have any state left anywhere to remember that we saw\nFD_CLOSE, even if you're careful to poll and read in the ideal\nsequence. I could be wrong, and it would be nice if there is an easy\nfix along those lines... The documentation around FD_CLOSE is\nunclear.\n\nI do plan to make a higher quality patch like the one I showed\n(material from earlier unfinished work[1] that needs a bit more\ninfrastructure), but to me that's new feature/efficiency work, not\nsomething we'd want to back-patch.\n\nHmm, one thing I'm still unclear on: did this problem really start\nwith 6051857fc/ed52c3707? My initial email in this thread lists\nsimilar failures going back further, doesn't it? 
(And what's tern\ndoing mixed up in this mess?)\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGJPaygh-6WHEd0FnH89GrkTpVyN_ew9ckv3%2BnwjmLcSeg%40mail.gmail.com#aa33ec3e7ad85499f35dd1434a139c3f\n\n\n", "msg_date": "Tue, 11 Jan 2022 09:52:01 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "\nOn 1/10/22 15:52, Thomas Munro wrote:\n>\n> Hmm, one thing I'm still unclear on: did this problem really start\n> with 6051857fc/ed52c3707? My initial email in this thread lists\n> similar failures going back further, doesn't it? (And what's tern\n> doing mixed up in this mess?)\n\n\n\nYour list contains at least some false positives. e.g.\n<https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2019-12-22%2014:19:02>\nwhich has a different script failing.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 10 Jan 2022 16:21:39 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Hmm, one thing I'm still unclear on: did this problem really start\n> with 6051857fc/ed52c3707? My initial email in this thread lists\n> similar failures going back further, doesn't it? 
(And what's tern\n> doing mixed up in this mess?)\n\nWell, those earlier ones may be committs failures, but a lot of\nthem contain different-looking symptoms, eg pg_ctl failures.\n\ntern's failure at\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2021-07-17+10%3A57%3A49\ndoes look similar, but we can see in its log that the standby\n*did* notice the primary disconnection and then reconnect:\n\n2021-07-17 16:29:08.248 UTC [17498380:2] LOG: replication terminated by primary server\n2021-07-17 16:29:08.248 UTC [17498380:3] DETAIL: End of WAL reached on timeline 1 at 0/30378F8.\n2021-07-17 16:29:08.248 UTC [17498380:4] FATAL: could not send end-of-streaming message to primary: no COPY in progress\n2021-07-17 16:29:08.248 UTC [25166230:5] LOG: invalid record length at 0/30378F8: wanted 24, got 0\n2021-07-17 16:29:08.350 UTC [16318578:1] FATAL: could not connect to the primary server: server closed the connection unexpectedly\n\t\tThis probably means the server terminated abnormally\n\t\tbefore or while processing the request.\n2021-07-17 16:29:36.369 UTC [7077918:1] FATAL: could not connect to the primary server: FATAL: the database system is starting up\n2021-07-17 16:29:36.380 UTC [11338028:1] FATAL: could not connect to the primary server: FATAL: the database system is starting up\n...\n2021-07-17 16:29:36.881 UTC [17367092:1] LOG: started streaming WAL from primary at 0/3000000 on timeline 1\n\nSo I'm not sure what happened there, but it's not an instance\nof this problem. One thing that looks a bit suspicious is\nthis in the primary's log:\n\n2021-07-17 16:26:47.832 UTC [12386550:1] LOG: using stale statistics instead of current ones because stats collector is not responding\n\nwhich makes me wonder if the timeout is down to out-of-date\npg_stats data. 
The loop in 002_standby.pl doesn't appear to\ndepend on the stats collector:\n\nmy $primary_lsn =\n $primary->safe_psql('postgres', 'select pg_current_wal_lsn()');\n$standby->poll_query_until('postgres',\n\tqq{SELECT '$primary_lsn'::pg_lsn <= pg_last_wal_replay_lsn()})\n or die \"standby never caught up\";\n\nbut maybe I'm missing the connection.\n\nApropos of that, it's worth noting that wait_for_catchup *is*\ndependent on up-to-date stats, and here's a recent run where\nit sure looks like the timeout cause is AWOL stats collector:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2022-01-10%2004%3A51%3A34\n\nI wonder if we should refactor wait_for_catchup to probe the\nstandby directly instead of relying on the upstream's view.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 Jan 2022 16:25:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "10.01.2022 23:52, Thomas Munro wrote:\n>> I'm yet to find out whether the other \n>> WaitLatchOrSocket' users (e. g. postgres_fdw) can suffer from the\n>> disconnected socket state, but this approach definitely works for\n>> walreceiver.\n> I see where you're going: there might be safe call sequences and\n> unsafe call sequences, and maybe walreceiver is asking for trouble by\n> double-polling. I'm not sure about that; I got the impression\n> recently that it's possible to get FD_CLOSE while you still have\n> buffered data to read, so then the next recv() will return > 0 and\n> then we don't have any state left anywhere to remember that we saw\n> FD_CLOSE, even if you're careful to poll and read in the ideal\n> sequence. I could be wrong, and it would be nice if there is an easy\n> fix along those lines... 
The documentation around FD_CLOSE is\n> unclear.\nI had no strong opinion regarding unsafe sequences, though initially I\nsuspected that exactly the second libpqrcv_PQgetResult call could cause\nthe issue. But after digging into WaitLatchOrSocket I was inclined to put\nthe fix deeper to satisfy all possible callers.\nOn the other hand, I've shared Tom's concerns regarding other clients\nthat can get stuck on WaitForMultipleObjects() just as walreceiver does, and\nhoped that only walreceiver suffers from a graceful server socket closing.\nSo to get these doubts cleared, I've made a simple test for postgres_fdw\n(please look at the attachment; you can put it into\ncontrib/postgres_fdw/t and run `vcregress taptest contrib\\\postgres_fdw`).\nThis test shows for me:\n===\n...\nt/001_disconnection.pl .. # 12:13:39.481084 executing query...\n# 12:13:43.245277 result:       0\n# 0|0\n\n# 12:13:43.246342 executing query...\n# 12:13:46.525924 result:       0\n# 0|0\n\n# 12:13:46.527097 executing query...\n# 12:13:47.745176 result:       3\n#\n# psql:<stdin>:1: WARNING:  no connection to the server\n# psql:<stdin>:1: ERROR:  FATAL:  terminating connection due to\nadministrator command\n# server closed the connection unexpectedly\n#       This probably means the server terminated abnormally\n#       before or while processing the request.\n# CONTEXT:  remote SQL command: FETCH 100 FROM c1\n# 12:13:47.794612 executing query...\n# 12:13:51.073318 result:       0\n# 0|0\n\n# 12:13:51.074347 executing query...\n===\n\nWith the simple logging added to connection.c:\n                /* Sleep until there's something to do */\nelog(LOG, \"pgfdw_get_result before WaitLatchOrSocket\");\n                wc = WaitLatchOrSocket(MyLatch,\n                                       WL_LATCH_SET | WL_SOCKET_READABLE |\n                                       WL_EXIT_ON_PM_DEATH,\n                                       PQsocket(conn),\n                                       -1L, 
PG_WAIT_EXTENSION);\nelog(LOG, \"pgfdw_get_result after WaitLatchOrSocket\");\n\nI see in 001_disconnection_local.log:\n...\n2022-01-11 15:13:52.875 MSK|Administrator|postgres|61dd747f.5e4|LOG: \npgfdw_get_result after WaitLatchOrSocket\n2022-01-11 15:13:52.875\nMSK|Administrator|postgres|61dd747f.5e4|STATEMENT:  SELECT * FROM large\nWHERE a = fx2(a)\n2022-01-11 15:13:52.875 MSK|Administrator|postgres|61dd747f.5e4|LOG: \npgfdw_get_result before WaitLatchOrSocket\n2022-01-11 15:13:52.875\nMSK|Administrator|postgres|61dd747f.5e4|STATEMENT:  SELECT * FROM large\nWHERE a = fx2(a)\n2022-01-11 15:14:36.976 MSK|||61dd74ac.840|DEBUG:  autovacuum:\nprocessing database \"postgres\"\n2022-01-11 15:14:51.088 MSK|Administrator|postgres|61dd747f.5e4|LOG: \npgfdw_get_result after WaitLatchOrSocket\n2022-01-11 15:14:51.088\nMSK|Administrator|postgres|61dd747f.5e4|STATEMENT:  SELECT * FROM large\nWHERE a = fx2(a)\n2022-01-11 15:14:51.089 MSK|Administrator|postgres|61dd747f.5e4|LOG: \npgfdw_get_result before WaitLatchOrSocket\n2022-01-11 15:14:51.089\nMSK|Administrator|postgres|61dd747f.5e4|STATEMENT:  SELECT * FROM large\nWHERE a = fx2(a)\n2022-01-11 15:15:37.006 MSK|||61dd74e9.9e8|DEBUG:  autovacuum:\nprocessing database \"postgres\"\n2022-01-11 15:16:37.116 MSK|||61dd7525.ad0|DEBUG:  autovacuum:\nprocessing database \"postgres\"\n2022-01-11 15:17:37.225 MSK|||61dd7561.6a0|DEBUG:  autovacuum:\nprocessing database \"postgres\"\n2022-01-11 15:18:36.916 MSK|||61dd7470.704|LOG:  checkpoint starting: time\n...\n2022-01-11 15:36:38.225 MSK|||61dd79d6.2a0|DEBUG:  autovacuum:\nprocessing database \"postgres\"\n...\n\nSo here we get similar hanging on WaitLatchOrSocket().\nJust to make sure that it's indeed the same issue, I've removed socket\nshutdown&close and the test executed to the end (several times). 
Argh.\n\nBest regards,\nAlexander", "msg_date": "Tue, 11 Jan 2022 18:00:01 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "On Wed, Jan 12, 2022 at 4:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n> So here we get similar hanging on WaitLatchOrSocket().\n> Just to make sure that it's indeed the same issue, I've removed socket\n> shutdown&close and the test executed to the end (several times). Argh.\n\nOuch. I think our options at this point are:\n1. Revert 6051857fc (and put it back when we have a working\nlong-lived WES as I showed). This is not very satisfying, now that we\nunderstand the bug, because even without that change I guess you must\nbe able to reach the hanging condition by using Windows postgres_fdw\nto talk to a non-Windows server (ie a normal TCP stack with graceful\nshutdown/linger on process exit).\n2. Put your poll() check into the READABLE side. There's some\nprecedent for that sort of kludge on the WRITEABLE side (and a\nrejection of the fragile idea that clients of latch.c should only\nperform \"safe\" sequences):\n\n /*\n * Windows does not guarantee to log an FD_WRITE network event\n * indicating that more data can be sent unless the previous send()\n * failed with WSAEWOULDBLOCK. While our caller might well have made\n * such a call, we cannot assume that here. Therefore, if waiting for\n * write-ready, force the issue by doing a dummy send(). If the dummy\n * send() succeeds, assume that the socket is in fact write-ready, and\n * return immediately. 
Also, if it fails with something other than\n * WSAEWOULDBLOCK, return a write-ready indication to let our caller\n * deal with the error condition.\n */\n\n\n", "msg_date": "Wed, 12 Jan 2022 09:10:42 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Ouch. I think our options at this point are:\n> 1. Revert 6051857fc (and put it back when we have a working\n> long-lived WES as I showed). This is not very satisfying, now that we\n> understand the bug, because even without that change I guess you must\n> be able to reach the hanging condition by using Windows postgres_fdw\n> to talk to a non-Windows server (ie a normal TCP stack with graceful\n> shutdown/linger on process exit).\n\nIt'd be worth checking, perhaps. One thing I've been wondering all\nalong is how much of this behavior is specific to the local-loopback\ncase where Windows can see both ends of the connection. You'd think\nthat they couldn't long get away with such blatant violations of the\nTCP specs when talking to external servers, because the failures\nwould be visible to everyone with a web browser.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 11 Jan 2022 15:16:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "> 11.01.2022 23:16, Tom Lane wrote:\n>> Thomas Munro <thomas.munro@gmail.com> writes:\n>>> Ouch. I think our options at this point are:\n>>> 1. Revert 6051857fc (and put it back when we have a working\n>>> long-lived WES as I showed). 
This is not very satisfying, now that we\n>>> understand the bug, because even without that change I guess you must\n>>> be able to reach the hanging condition by using Windows postgres_fdw\n>>> to talk to a non-Windows server (ie a normal TCP stack with graceful\n>>> shutdown/linger on process exit).\n>> It'd be worth checking, perhaps. One thing I've been wondering all\n>> along is how much of this behavior is specific to the local-loopback\n>> case where Windows can see both ends of the connection. You'd think\n>> that they couldn't long get away with such blatant violations of the\n>> TCP specs when talking to external servers, because the failures\n>> would be visible to everyone with a web browser.\nI've split my test (both parts attached) and run it on two virtual\nmachines with clean builds from master (ac7c8075) on both (just the\ndebugging output added to connection.c). I provide probably redundant\ninfo (also see attached screenshot) just to make sure that I didn't make\na mistake.\nThe excerpt from 001_disconnection1_local.log:\n...\n2022-01-12 09:29:48.099 MSK|Administrator|postgres|61de755a.a54|LOG: \npgfdw_get_result: before WaitLatchOrSocket\n2022-01-12 09:29:48.099\nMSK|Administrator|postgres|61de755a.a54|STATEMENT:  SELECT * FROM large\nWHERE a = fx2(a)\n2022-01-12 09:29:48.100 MSK|Administrator|postgres|61de755a.a54|LOG: \npgfdw_get_result: after WaitLatchOrSocket\n2022-01-12 09:29:48.100\nMSK|Administrator|postgres|61de755a.a54|STATEMENT:  SELECT * FROM large\nWHERE a = fx2(a)\n2022-01-12 09:29:48.100 MSK|Administrator|postgres|61de755a.a54|LOG: \npgfdw_get_result: before WaitLatchOrSocket\n2022-01-12 09:29:48.100\nMSK|Administrator|postgres|61de755a.a54|STATEMENT:  SELECT * FROM large\nWHERE a = fx2(a)\n2022-01-12 09:29:48.100 MSK|Administrator|postgres|61de755a.a54|LOG: \npgfdw_get_result: after WaitLatchOrSocket\n2022-01-12 09:29:48.100\nMSK|Administrator|postgres|61de755a.a54|STATEMENT:  SELECT * FROM large\nWHERE a = fx2(a)\n2022-01-12 
09:29:48.100 MSK|Administrator|postgres|61de755a.a54|ERROR: \nFATAL:  terminating connection due to administrator command\n    server closed the connection unexpectedly\n        This probably means the server terminated abnormally\n        before or while processing the request.\n2022-01-12 09:29:48.100\nMSK|Administrator|postgres|61de755a.a54|CONTEXT:  remote SQL command:\nFETCH 100 FROM c1\n2022-01-12 09:29:48.100\nMSK|Administrator|postgres|61de755a.a54|WARNING:  no connection to the\nserver\n2022-01-12 09:29:48.100\nMSK|Administrator|postgres|61de755a.a54|CONTEXT:  remote SQL command:\nABORT TRANSACTION\n2022-01-12 09:29:48.107 MSK|Administrator|postgres|61de755a.a54|LOG: \ndisconnection: session time: 0:00:01.577 user=Administrator\ndatabase=postgres host=127.0.0.1 port=49752\n2022-01-12 09:29:48.257 MSK|[unknown]|[unknown]|61de755c.a4c|LOG: \nconnection received: host=127.0.0.1 port=49754\n2022-01-12 09:29:48.261 MSK|Administrator|postgres|61de755c.a4c|LOG: \nconnection authenticated: identity=\"WIN-FCPSOVMM1JC\\Administrator\"\nmethod=sspi\n(C:/src/postgrespro/contrib/postgres_fdw/tmp_check/t_001_disconnection1_local_data/pgdata/pg_hba.conf:2)\n2022-01-12 09:29:48.261 MSK|Administrator|postgres|61de755c.a4c|LOG: \nconnection authorized: user=Administrator database=postgres\napplication_name=001_disconnection1.pl\n2022-01-12 09:29:48.263 MSK|Administrator|postgres|61de755c.a4c|LOG: \nstatement: SELECT * FROM large WHERE a = fx2(a)\n2022-01-12 09:29:48.285 MSK|Administrator|postgres|61de755c.a4c|LOG: \npgfdw_get_result: before WaitLatchOrSocket\n2022-01-12 09:29:48.285\nMSK|Administrator|postgres|61de755c.a4c|STATEMENT:  SELECT * FROM large\nWHERE a = fx2(a)\n...\n\nBy the look of things, you are right and this is the localhost-only issue.\nI've rechecked that the test 001_disconnection.pl (local-loopback\nversion) hangs on both machines while 001_disconnection1.pl performs\nsuccessfully in both directions. 
I'm not sure whether the Windows client\nand non-Windows server or reverse combinations are of interest in light\nof the above.\n\nBest regards,\nAlexander", "msg_date": "Wed, 12 Jan 2022 10:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "On Wed, Jan 12, 2022 at 8:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> By the look of things, you are right and this is the localhost-only issue.\n\nBut can't that be explained with timing races? You change some stuff\naround and it becomes less likely that you get a FIN to arrive in a\nsuper narrow window, which I'm guessing looks something like: recv ->\nEWOULDBLOCK, [receive FIN], wait -> FD_CLOSE, wait [hangs]. Note that\nit's not happening on several Windows BF animals, and the ones it is\nhappening on only do it only every few weeks.\n\nHere's a draft attempt at a fix. First I tried to use recv(fd, &c, 1,\nMSG_PEEK) == 0 to detect EOF, which seemed to me to be a reasonable\nenough candidate, but somehow it corrupts the stream (!?), so I used\nAlexander's POLLHUP idea, except I pushed it down to a more principled\nplace IMHO. Then I suppressed it after the initial check because then\nthe logic from my earlier patch takes over, so stuff like FeBeWaitSet\ndoesn't suffer from extra calls, only these two paths that haven't\nbeen converted to long-lived WESes yet. Does this pass the test?\n\nI wonder if this POLLHUP technique is reliable enough (I know that\nwouldn't work on other systems[1], which is why I was trying to make\nMSG_PEEK work...).\n\nWhat about environment variable PG_TEST_USE_UNIX_SOCKETS=1, does it\nreproduce with that set, and does the patch fix it? 
I'm hoping that\nexplains some Windows CI failures from a nearby thread[2].\n\n[1] https://illumos.topicbox.com/groups/developer/T5576767e764aa26a-Maf8f3460c2866513b0ac51bf\n[2] https://www.postgresql.org/message-id/flat/CALT9ZEG%3DC%3DJSypzt2gz6NoNtx-ew2tYHbwiOfY_xNo%2ByBY_%3Djw%40mail.gmail.com", "msg_date": "Thu, 13 Jan 2022 19:36:01 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "13.01.2022 09:36, Thomas Munro wrote:\n> On Wed, Jan 12, 2022 at 8:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n>> By the look of things, you are right and this is the localhost-only issue.\n> But can't that be explained with timing races? You change some stuff\n> around and it becomes less likely that you get a FIN to arrive in a\n> super narrow window, which I'm guessing looks something like: recv ->\n> EWOULDBLOCK, [receive FIN], wait -> FD_CLOSE, wait [hangs]. Note that\n> it's not happening on several Windows BF animals, and the ones it is\n> happening on only do it only every few weeks.\nBut I still see the issue when I run both test parts on a single\nmachine: first instance is `vcregress taptest src\\test\\restart` and the\nsecond `set NO_TEMP_INSTALL=1 & vcregress taptest contrib/postgres_fdw`\n(see attachment).\n\nI'll try new tests and continue investigation later today/tomorrow. Thanks!\n\nBest regards,\nAlexander", "msg_date": "Thu, 13 Jan 2022 10:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "On Thu, Jan 13, 2022 at 7:36 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> ... 
First I tried to use recv(fd, &c, 1,\n> MSG_PEEK) == 0 to detect EOF, which seemed to me to be a reasonable\n> enough candidate, but somehow it corrupts the stream (!?),\n\nAhh, that'd be because recv() and friends are redirected to our\nwrappers in socket.c, where we use the overlapped Winsock API (that\nis, async network IO), which is documented as not supporting MSG_PEEK.\nOK then.\n\nAndres and I chatted about this stuff off list and he pointed out\nsomething else about the wrappers in socket.c: there are more paths in\nthere that work with socket events, which means more ways to lose the\nprecious FD_CLOSE event. I don't know if any of those paths are\nreachable in the relevant cases, but it does look a little bit like\nthe lack of graceful shutdown might have been hiding a whole class of\nevent tracking bug.\n\n\n", "msg_date": "Fri, 14 Jan 2022 20:31:22 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "13.01.2022 09:36, Thomas Munro wrote:\n> Here's a draft attempt at a fix. First I tried to use recv(fd, &c, 1,\n> MSG_PEEK) == 0 to detect EOF, which seemed to me to be a reasonable\n> enough candidate, but somehow it corrupts the stream (!?), so I used\n> Alexander's POLLHUP idea, except I pushed it down to a more principled\n> place IMHO. Then I suppressed it after the initial check because then\n> the logic from my earlier patch takes over, so stuff like FeBeWaitSet\n> doesn't suffer from extra calls, only these two paths that haven't\n> been converted to long-lived WESes yet. Does this pass the test?\nYes, this fix eliminates the flakiness for me. 
The test commit_ts (with\n002_standby and 003_standby_2) passed 2x200 iterations.\nIt also makes my test postgres_fdw/001_disconnection pass reliably.\n> I wonder if this POLLHUP technique is reliable enough (I know that\n> wouldn't work on other systems[1], which is why I was trying to make\n> MSG_PEEK work...).\n>\n> What about environment variable PG_TEST_USE_UNIX_SOCKETS=1, does it\n> reproduce with that set, and does the patch fix it? I'm hoping that\n> explains some Windows CI failures from a nearby thread[2].\nWith PG_TEST_USE_UNIX_SOCKETS=1 the test commit_ts/002_standby fails on\nthe unpatched HEAD:\nt/002_standby.pl .... 1/4 # poll_query_until timed out executing this query:\n# SELECT '0/303C628'::pg_lsn <= pg_last_wal_replay_lsn()\n# expecting this output:\n# t\n# last actual query output:\n# f\n# with stderr:\n# Looks like your test exited with 25 just after 1.\nt/002_standby.pl .... Dubious, test returned 25 (wstat 6400, 0x1900)\n\n002_standby_primary.log contains:\n2022-01-13 18:57:32.925 PST [1448] LOG:  starting PostgreSQL 15devel,\ncompiled by Visual C++ build 1928, 64-bit\n2022-01-13 18:57:32.926 PST [1448] LOG:  listening on Unix socket\n\"C:/Users/1/AppData/Local/Temp/yOKQPH1FoO/.s.PGSQL.62733\"\n\nThe same with my postgres_fdw test:\n# 03:41:12.533246 result:       0\n# 0|0\n# 03:41:12.534758 executing query...\n# 03:41:14.267594 result:       3\n#\n# psql:<stdin>:1: WARNING:  no connection to the server\n# psql:<stdin>:1: ERROR:  FATAL:  terminating connection due to\nadministrator command\n# server closed the connection unexpectedly\n#       This probably means the server terminated abnormally\n#       before or while processing the request.\n# CONTEXT:  remote SQL command: FETCH 100 FROM c1\n# 03:41:14.270449 executing query...\n# 03:41:14.334437 result:       3\n#\n# psql:<stdin>:1: ERROR:  could not connect to server \"fpg\"\n# DETAIL:  connection to server on socket\n\"C:/Users/1/AppData/Local/Temp/hJWD9mzPHM/.s.PGSQL.57414\" 
failed:\nConnection refused (0x0000274D/10061)\n#       Is the server running locally and accepting connections on that\nsocket?\n# 03:41:14.336918 executing query...\n# 03:41:14.422381 result:       3\n#\n# psql:<stdin>:1: ERROR:  could not connect to server \"fpg\"\n# DETAIL:  connection to server on socket\n\"C:/Users/1/AppData/Local/Temp/hJWD9mzPHM/.s.PGSQL.57414\" failed:\nConnection refused (0x0000274D/10061)\n#       Is the server running locally and accepting connections on that\nsocket?\n# 03:41:14.425628 executing query...\n...hang...\n\nWith the patch these tests pass successfully.\n\nI can also confirm that on Windows 10 20H2 (previous tests were\nperformed on Windows Server 2012) the unpatched HEAD +\nPG_TEST_USE_UNIX_SOCKETS=1 hangs on src\\test\\recovery\\001_stream_rep (on\niterations 1, 1, 4 for me).\n(v9-0001-Add-option-for-amcheck-and-pg_amcheck-to-check-un.patch [1] not\nrequired to see that.)\n001_stream_rep_primary.log contains:\n...\n2022-01-13 19:46:48.287 PST [1364] LOG:  listening on Unix socket\n\"C:/Users/1/AppData/Local/Temp/EWzapwiXjV/.s.PGSQL.58248\"\n2022-01-13 19:46:48.317 PST [6736] LOG:  database system was shut down\nat 2022-01-13 19:46:37 PST\n2022-01-13 19:46:48.331 PST [1364] LOG:  database system is ready to\naccept connections\n2022-01-13 19:46:49.536 PST [1088] 001_stream_rep.pl LOG:  statement:\nCREATE TABLE tab_int AS SELECT generate_series(1,1002) AS a\n2022-01-13 19:46:49.646 PST [3028] 001_stream_rep.pl LOG:  statement:\nSELECT pg_current_wal_insert_lsn()\n2022-01-13 19:46:49.745 PST [3360] 001_stream_rep.pl LOG:  statement:\nSELECT '0/3023268' <= replay_lsn AND state = 'streaming' FROM\npg_catalog.pg_stat_replication WHERE application_name = 'standby_1';\n...\n2022-01-13 19:50:39.755 PST [4924] 001_stream_rep.pl LOG:  statement:\nSELECT '0/3023268' <= replay_lsn AND state = 'streaming' FROM\npg_catalog.pg_stat_replication WHERE application_name = 'standby_1';\n2022-01-13 19:50:39.928 PST [5924] 001_stream_rep.pl LOG:  
statement:\nSELECT '0/3023268' <= replay_lsn AND state = 'streaming' FROM\npg_catalog.pg_stat_replication WHERE application_name = 'standby_1';\n\nWithout PG_TEST_USE_UNIX_SOCKETS=1 and without the fix the\n001_stream_rep hangs too (but on iterations 3, 8, 2). So it seems that\nusing unix sockets increases the fail rate.\n\nWith the fix 100 iterations with PG_TEST_USE_UNIX_SOCKETS=1 and 40 (and\nstill counting) iterations without PG_TEST_USE_UNIX_SOCKETS pass.\n\nSo the fix looks as absolutely working to me with the tests that we have\nfor now.\n\n[1]\nhttps://www.postgresql.org/message-id/CALT9ZEHx2%2B9rqAeAANkUXJCsTueQqdx2Tt6ypaig9tozJkWvkQ%40mail.gmail.com\n\n\n\n", "msg_date": "Fri, 14 Jan 2022 11:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "Hi,\n\nOn 2022-01-14 20:31:22 +1300, Thomas Munro wrote:\n> Andres and I chatted about this stuff off list and he pointed out\n> something else about the wrappers in socket.c: there are more paths in\n> there that work with socket events, which means more ways to lose the\n> precious FD_CLOSE event.\n\nI think it doesn't even need to touch socket.c to cause breakage. Using two\ndifferent WaitEventSets is enough.\n\nhttps://docs.microsoft.com/en-us/windows/win32/api/winsock2/nf-winsock2-wsaeventselect\nsays:\n\n> The FD_CLOSE network event is recorded when a close indication is received\n> for the virtual circuit corresponding to the socket. In TCP terms, this\n> means that the FD_CLOSE is recorded when the connection goes into the TIME\n> WAIT or CLOSE WAIT states. This results from the remote end performing a\n> shutdown on the send side or a closesocket. FD_CLOSE being posted after all\n> data is read from a socket\n\nSo FD_CLOSE is *recorded* internally when the connection is closed. But only\nposted to the visible event once all data is read. All good so far. 
But\ncombine that with:\n\n> Issuing a WSAEventSelect for a socket cancels any previous WSAAsyncSelect or\n> WSAEventSelect for the same socket and clears the internal network event\n> record.\n\nNote the bit about clearing the internal network event record. Which seems to\npretty precisely explain why we're losing FD_CLOSEs?\n\nAnd it does also explain why this is more likely after the shutdown changes:\nIt's more likely the network stack knows it has readable data *and* that the\nconnection closed. Which is recorded in the \"internal network event\nrecord\". But once all the data is read, walsender.c will do another\nWaitLatchOrSocket(), which does WSAEventSelect(), clearing the \"internal event\nrecord\" and losing the FD_CLOSE.\n\n\nMy first inclination was that we ought to wrap the socket created for windows\nin pgwin32_socket() in a custom type with some additional data - including\ninformation about already received events, an EVENT, etc. I think that might\nhelp to remove a lot of the messy workarounds we have in socket.c etc. But: It\nwouldn't do much good here, because the socket is not a socket created by\nsocket.c but by libpq :(.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 14 Jan 2022 12:28:48 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" 
}, { "msg_contents": "Hi,\n\nOn 2022-01-14 12:28:48 -0800, Andres Freund wrote:\n> But once all the data is read, walsender.c will do another\n> WaitLatchOrSocket(), which does WSAEventSelect(), clearing the \"internal event\n> record\" and loosing the FD_CLOSE.\n\nWalreceiver only started using WES in\n2016-03-29 [314cbfc5d] Add new replication mode synchronous_commit = 'remote_ap\n\nWith that came the following comment:\n\n /*\n * Ideally we would reuse a WaitEventSet object repeatedly\n * here to avoid the overheads of WaitLatchOrSocket on epoll\n * systems, but we can't be sure that libpq (or any other\n * walreceiver implementation) has the same socket (even if\n * the fd is the same number, it may have been closed and\n * reopened since the last time). In future, if there is a\n * function for removing sockets from WaitEventSet, then we\n * could add and remove just the socket each time, potentially\n * avoiding some system calls.\n */\n Assert(wait_fd != PGINVALID_SOCKET);\n rc = WaitLatchOrSocket(MyLatch,\n WL_EXIT_ON_PM_DEATH | WL_SOCKET_READABLE |\n WL_TIMEOUT | WL_LATCH_SET,\n wait_fd,\n NAPTIME_PER_CYCLE,\n WAIT_EVENT_WAL_RECEIVER_MAIN);\n\nI don't really see how libpq could have changed the socket underneath us, as\nlong as we get it the first time after the connection successfully was\nestablished? I mean, there's a running command that we're processing the\nresult of? Nor do I understand what \"any other walreceiver implementation\"\nrefers to?\n\nThomas, I think you wrote that?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 14 Jan 2022 12:47:26 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "On Sat, Jan 15, 2022 at 9:28 AM Andres Freund <andres@anarazel.de> wrote:\n> I think it doesn't even need to touch socket.c to cause breakage. Using two\n> different WaitEventSets is enough.\n\nRight. 
I was interested in your observation because so far we'd\n*only* been considering the two-consecutive-WaitEventSets case, which\nwe could grok experimentally. The patch Alexander tested most\nrecently uses a tri-state eof flag, so (1) the WES event starts out in\n\"unknown\" state and polls with WSAPoll() to figure out whether the\nsocket was already closed when it wasn't looking, and then (2) it\nswitches to believing that we'll definitely get an FD_CLOSE so we\ndon't need to make the extra call on every wait. That does seem to do\nthe job, but if there are *other* places that can eat FD_CLOSE event\nonce we've switched to believing that it will come, that might be\nfatal to the second part of that idea, and we might have to assume\n\"unknown\" all the time, which would be somewhat similar to the way we\ndo a dummy WSASend() every time when waiting for WRITEABLE.\n\n(That patch is assuming that we're looking for something simple to\nback-patch, in the event that we decide not to just revert the\ngraceful shutdown patch from back branches. Reverting might be a\nbetter idea for now, and then we could fix all this stuff going\nforward.)\n\n> > Issuing a WSAEventSelect for a socket cancels any previous WSAAsyncSelect or\n> > WSAEventSelect for the same socket and clears the internal network event\n> > record.\n>\n> Note the bit about clearing the internal network event record. Which seems to\n> pretty precisely explain why we're loosing FD_CLOSEs?\n\nIndeed.\n\n> And it does also explain why this is more likely after the shutdown changes:\n> It's more likely the network stack knows it has readable data *and* that the\n> connection closed. Which is recorded in the \"internal network event\n> record\". 
But once all the data is read, walsender.c will do another\n> WaitLatchOrSocket(), which does WSAEventSelect(), clearing the \"internal event\n> record\" and loosing the FD_CLOSE.\n\nYeah.\n\n\n", "msg_date": "Sat, 15 Jan 2022 10:59:00 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "On Sat, Jan 15, 2022 at 10:59 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> (That patch is assuming that we're looking for something simple to\n> back-patch, in the event that we decide not to just revert the\n> graceful shutdown patch from back branches. Reverting might be a\n> better idea for now, and then we could fix all this stuff going\n> forward.)\n\n(Though, as mentioned already, reverting isn't really enough either,\nbecause other OSes that Windows might be talking to use lingering\nsockets... and there may still be ways for this to break...)\n\n\n", "msg_date": "Sat, 15 Jan 2022 11:01:44 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "Hi,\n\nOn 2022-01-15 10:59:00 +1300, Thomas Munro wrote:\n> On Sat, Jan 15, 2022 at 9:28 AM Andres Freund <andres@anarazel.de> wrote:\n> > I think it doesn't even need to touch socket.c to cause breakage. Using two\n> > different WaitEventSets is enough.\n>\n> Right. 
I was interested in your observation because so far we'd\n> *only* been considering the two-consecutive-WaitEventSets case, which\n> we could grok experimentally.\n\nThere likely are further problems in other parts, but I think socket.c is\nunlikely to be involved in walreceiver case - there shouldn't be any socket.c\nstyle socket in walreceiver itself, nor do I think we are doing a\nsend/recv/select backed by socket.c.\n\n\n> The patch Alexander tested most recently uses a tri-state eof flag [...]\n\nWhat about instead giving WalReceiverConn an internal WaitEventSet, and using\nthat consistently? I've attached a draft for that.\n\nAlexander, could you test with that patch applied?\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 14 Jan 2022 14:44:20 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> (That patch is assuming that we're looking for something simple to\n> back-patch, in the event that we decide not to just revert the\n> graceful shutdown patch from back branches. Reverting might be a\n> better idea for now, and then we could fix all this stuff going\n> forward.)\n\nFWIW, I'm just fine with reverting, particularly in the back branches.\nIt seems clear that this dank corner of Windows contains even more\ncreepy-crawlies than we thought.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 Jan 2022 17:51:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "On Sat, Jan 15, 2022 at 11:44 AM Andres Freund <andres@anarazel.de> wrote:\n> > The patch Alexander tested most recently uses a tri-state eof flag [...]\n>\n> What about instead giving WalReceiverConn an internal WaitEventSet, and using\n> that consistently? 
I've attached a draft for that.\n>\n> Alexander, could you test with that patch applied?\n\nIsn't your patch nearly identical to one that I already posted, that\nAlexander tested and reported success with here?\n\nhttps://www.postgresql.org/message-id/5d507424-13ce-d19f-2f5d-ab4c6a987316%40gmail.com\n\nI can believe that fixes walreceiver (if we're sure that there isn't a\nlibpq-changes-the-socket problem), but AFAICS the same problem exists\nfor postgres_fdw and async append. That's why I moved to trying to\nfix the multiple-WES thing (though of course I agree we should be\nusing long lived WESes wherever possible, I just didn't think that\nseemed back-patchable, so it's more of a feature patch for the\nfuture).\n\n\n", "msg_date": "Sat, 15 Jan 2022 13:19:42 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "On Sat, Jan 15, 2022 at 9:47 AM Andres Freund <andres@anarazel.de> wrote:\n> Walreceiver only started using WES in\n> 2016-03-29 [314cbfc5d] Add new replication mode synchronous_commit = 'remote_ap\n>\n> With that came the following comment:\n>\n> /*\n> * Ideally we would reuse a WaitEventSet object repeatedly\n> * here to avoid the overheads of WaitLatchOrSocket on epoll\n> * systems, but we can't be sure that libpq (or any other\n> * walreceiver implementation) has the same socket (even if\n> * the fd is the same number, it may have been closed and\n> * reopened since the last time). 
In future, if there is a\n> * function for removing sockets from WaitEventSet, then we\n> * could add and remove just the socket each time, potentially\n> * avoiding some system calls.\n> */\n> Assert(wait_fd != PGINVALID_SOCKET);\n> rc = WaitLatchOrSocket(MyLatch,\n> WL_EXIT_ON_PM_DEATH | WL_SOCKET_READABLE |\n> WL_TIMEOUT | WL_LATCH_SET,\n> wait_fd,\n> NAPTIME_PER_CYCLE,\n> WAIT_EVENT_WAL_RECEIVER_MAIN);\n>\n> I don't really see how libpq could have changed the socket underneath us, as\n> long as we get it the first time after the connection successfully was\n> established? I mean, there's a running command that we're processing the\n> result of?\n\nErm, I didn't analyse the situation much back then, I just knew that\nlibpq could reconnect in early phases. I can see that once you reach\nthat stage you can count on socket stability though, so yeah that\nshould work as long as you can handle it correctly in the earlier\nconnection phase (probably using the other patch I posted and\nAlexander tested), it should all work nicely. You'd probably want to\nformalise the interface/documentation on that point.\n\n> Nor do I understand what \"any other walreceiver implementation\"\n> refers to?\n\nI think I meant that it goes via function pointers to talk to\nlibpqwalreceiver.c, but I know now that we don't actually support\nusing that to switch to different code, it's just a solution to a\nbackend/frontend linking problem. The comment was probably just\nparanoia based on the way the interface works.\n\n\n", "msg_date": "Sat, 15 Jan 2022 13:40:59 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" 
}, { "msg_contents": "Hi,\n\nOn 2022-01-15 13:19:42 +1300, Thomas Munro wrote:\n> On Sat, Jan 15, 2022 at 11:44 AM Andres Freund <andres@anarazel.de> wrote:\n> > > The patch Alexander tested most recently uses a tri-state eof flag [...]\n> >\n> > What about instead giving WalReceiverConn an internal WaitEventSet, and using\n> > that consistently? I've attached a draft for that.\n> >\n> > Alexander, could you test with that patch applied?\n> \n> Isn't your patch nearly identical to one that I already posted, that\n> Alexander tested and reported success with here?\n\nSorry, somehow I missed that across the many patches in the thread... And yes,\nit does look remarkably similar.\n\n\n> https://www.postgresql.org/message-id/5d507424-13ce-d19f-2f5d-ab4c6a987316%40gmail.com\n> \n> I can believe that fixes walreceiver\n\nOne thing that still bothers me around this is that we didn't detect the\nproblem of the dead walreceiver connection, even after missing the\nFD_CLOSE. There are plenty of other ways that a connection can get stalled for\nprolonged periods, that we'd not get notified about either. That's why there's\nwal_receiver_timeout, after all.\n\nBut from what I can see wal_receiver_timeout doesn't work even halfway\nreliably, because of all the calls down to libpqrcv_PQgetResult, where we just\nblock indefinitely?\n\nThis actually seems like a significant issue to me, and not just on windows.\n\n\n\n> (if we're sure that there isn't a libpq-changes-the-socket problem)\n\nI just don't see what that problem could be, once the connection is\nestablished. The only way the socket fd could change is a reconnect, which\ndoesn't happen automatically.\n\nI actually was searching the archives for threads on it, but I didn't find\nmuch besides the references around [1]. 
And I didn't see a concrete risk\nexplained there?\n\n\n> but AFAICS the same problem exists for postgres_fdw and async append.\n\nPerhaps - but I suspect it'll matter far less with them than with walreceiver.\n\n\n\n> That's why I moved to trying to\n> fix the multiple-WES thing (though of course I agree we should be\n> using long lived WESes wherever possible\n\nThat approach seems like a very very leaky bandaid, with a decent potential\nfor unintended consequences. Perhaps there's nothing better that we can do,\nbut I'd rather try to fix the problem closer to the root...\n\n\n> I just didn't think that seemed back-patchable, so it's more of a feature\n> patch for the future).\n\nHm, it doesn't seem crazy invasive to me. But I guess we might be looking at a\nrevert of the shutdown changes for now anyway? In that case we should be\nfixing this anyway, but we might be able to afford doing it in master only?\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKG%2BzCNJZBXcURPdQvdY-tjyD0y7Li2wZEC6XChyUej1S5w%40mail.gmail.com\n\n\n", "msg_date": "Fri, 14 Jan 2022 16:51:01 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "Hello Andres,\n15.01.2022 01:44, Andres Freund wrote:\n> What about instead giving WalReceiverConn an internal WaitEventSet, and using\n> that consistently? I've attached a draft for that.\n>\n> Alexander, could you test with that patch applied?\nUnfortunately, this patch doesn't fix the issue. 
The test\ncommit_ts/002_standby still fails (on iterations 3, 1, 8 for me).\nWith the debugging logging added:\n...\nelog(LOG, \"libpqrcv_wait() before WaitEventSetWait(%p)\", conn->wes);\n    nevents = WaitEventSetWait(conn->wes, timeout, &event, 1,\nwait_event_info);\nelog(LOG, \"libpqrcv_wait() after WaitEventSetWait\");\n...\nelog(LOG, \"WaitEventSetWaitBlock before WaitForMultipleObjects(%p)\",\nset->handles);\n    rc = WaitForMultipleObjects(set->nevents + 1, set->handles, FALSE,\n                                cur_timeout);\nelog(LOG, \"WaitEventSetWaitBlock after WaitForMultipleObjects\");\n...\nand so on.\n\nI see in 002_standby_standby.log\n...\n2022-01-16 13:31:36.244 MSK [1336] LOG:  libpqrcv_receive PQgetCopyData\nreturned: 145\n2022-01-16 13:31:36.244 MSK [1336] LOG:  libpqrcv_receive PQgetCopyData\nreturned: 0\n2022-01-16 13:31:36.244 MSK [1336] LOG:  libpqrcv_wait() before\nWaitEventSetWait(000000000063ABA8)\n2022-01-16 13:31:36.244 MSK [1336] LOG:  WaitEventSetWait before\nWaitEventSetWaitBlock(000000000063ABA8)\n2022-01-16 13:31:36.244 MSK [1336] LOG:  WaitEventSetWaitBlock before\nWaitForMultipleObjects(000000000063AC30)\n2022-01-16 13:31:36.244 MSK [2820] LOG:  WaitEventSetWaitBlock after\nWaitForMultipleObjects\n2022-01-16 13:31:36.244 MSK [2820] LOG:  WaitEventSetWait after\nWaitEventSetWaitBlock\n2022-01-16 13:31:36.244 MSK [1336] LOG:  WaitEventSetWaitBlock after\nWaitForMultipleObjects\n2022-01-16 13:31:36.244 MSK [1336] LOG:  WaitEventSetWait after\nWaitEventSetWaitBlock\n2022-01-16 13:31:36.244 MSK [1336] LOG:  libpqrcv_wait() after\nWaitEventSetWait\n2022-01-16 13:31:36.244 MSK [1336] LOG:  libpqrcv_receive PQgetCopyData\nreturned: 0\n2022-01-16 13:31:36.244 MSK [1336] LOG:  libpqrcv_wait() before\nWaitEventSetWait(000000000063ABA8)\n2022-01-16 13:31:36.244 MSK [1336] LOG:  WaitEventSetWait before\nWaitEventSetWaitBlock(000000000063ABA8)\n2022-01-16 13:31:36.244 MSK [1336] LOG:  WaitEventSetWaitBlock before\nWaitForMultipleObjects(000000000063AC30)\n2022-01-16 13:31:36.244 MSK [2820] LOG:  WaitEventSetWait before\nWaitEventSetWaitBlock(0000000000649FB8)\n2022-01-16 13:31:36.244 MSK [2820] LOG:  WaitEventSetWaitBlock before\nWaitForMultipleObjects(000000000064A020)\n2022-01-16 13:31:36.247 MSK [1336] LOG:  WaitEventSetWaitBlock after\nWaitForMultipleObjects\n2022-01-16 13:31:36.247 MSK [1336] LOG:  WaitEventSetWait after\nWaitEventSetWaitBlock\n2022-01-16 13:31:36.247 MSK [1336] LOG:  libpqrcv_wait() after\nWaitEventSetWait\n2022-01-16 13:31:36.247 MSK [1336] LOG:  libpqrcv_receive PQgetCopyData\nreturned: -1\n2022-01-16 13:31:36.247 MSK [1336] LOG:  libpqrcv_receive before\nlibpqrcv_PQgetResult(1)\n2022-01-16 13:31:36.247 MSK [1336] LOG:  libpqrcv_receive\nlibpqrcv_PQgetResult(1) returned 0000000000692400\n2022-01-16 13:31:36.247 MSK [1336] LOG:  libpqrcv_receive before\nlibpqrcv_PQgetResult(2)\n2022-01-16 13:31:36.247 MSK [1336] LOG:  libpqrcv_wait() before\nWaitEventSetWait(000000000063ABA8)\n2022-01-16 13:31:36.247 MSK [1336] LOG:  WaitEventSetWait before\nWaitEventSetWaitBlock(000000000063ABA8)\n2022-01-16 13:31:36.247 MSK [1336] LOG:  WaitEventSetWaitBlock before\nWaitForMultipleObjects(000000000063AC30)\n2022-01-16 13:31:36.368 MSK [984] LOG:  WaitEventSetWaitBlock after\nWaitForMultipleObjects\n2022-01-16 13:31:36.368 MSK [984] LOG:  WaitEventSetWait after\nWaitEventSetWaitBlock\n...\nAfter that, the process 1336 hangs till shutdown.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sun, 16 Jan 2022 14:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "Hi,\n\nOn 2022-01-14 17:51:52 -0500, Tom Lane wrote:\n> FWIW, I'm just fine with reverting, particularly in the back branches.\n> It seems clear that this dank corner of Windows contains even more\n> creepy-crawlies than we thought.\n\nSeems we should revert now-ish?
There's a minor release coming up and I think\nit'd be bad to ship these changes to users.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 24 Jan 2022 12:02:22 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-01-14 17:51:52 -0500, Tom Lane wrote:\n>> FWIW, I'm just fine with reverting, particularly in the back branches.\n>> It seems clear that this dank corner of Windows contains even more\n>> creepy-crawlies than we thought.\n\n> Seems we should revert now-ish? There's a minor release coming up and I think\n> it'd be bad to ship these changes to users.\n\nSure. Do we want to revert in HEAD too?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jan 2022 15:35:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "Hi,\n\nOn 2022-01-24 15:35:25 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-01-14 17:51:52 -0500, Tom Lane wrote:\n> >> FWIW, I'm just fine with reverting, particularly in the back branches.\n> >> It seems clear that this dank corner of Windows contains even more\n> >> creepy-crawlies than we thought.\n> \n> > Seems we should revert now-ish? There's a minor release coming up and I think\n> > it'd be bad to ship these changes to users.\n> \n> Sure. Do we want to revert in HEAD too?\n\nNot sure. I'm also OK with trying to go with Thomas' patch to walreceiver and\ntry a bit longer to get all this working. Thomas?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 24 Jan 2022 13:28:13 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" 
}, { "msg_contents": "On Tue, Jan 25, 2022 at 10:28 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-01-24 15:35:25 -0500, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > On 2022-01-14 17:51:52 -0500, Tom Lane wrote:\n> > >> FWIW, I'm just fine with reverting, particularly in the back branches.\n> > >> It seems clear that this dank corner of Windows contains even more\n> > >> creepy-crawlies than we thought.\n> >\n> > > Seems we should revert now-ish? There's a minor release coming up and I think\n> > > it'd be bad to ship these changes to users.\n> >\n> > Sure. Do we want to revert in HEAD too?\n>\n> Not sure. I'm also OK with trying to go with Thomas' patch to walreceiver and\n> try a bit longer to get all this working. Thomas?\n\nI vote for reverting in release branches only. I'll propose a better\nWES patch set for master that hopefully also covers async append etc\n(which I was already planning to do before we knew about this Windows\nproblem). More soon.\n\n\n", "msg_date": "Tue, 25 Jan 2022 15:28:45 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Tue, Jan 25, 2022 at 10:28 AM Andres Freund <andres@anarazel.de> wrote:\n>> On 2022-01-24 15:35:25 -0500, Tom Lane wrote:\n>>> Sure. Do we want to revert in HEAD too?\n\n>> Not sure. I'm also OK with trying to go with Thomas' patch to walreceiver and\n>> try a bit longer to get all this working. Thomas?\n\n> I vote for reverting in release branches only. I'll propose a better\n> WES patch set for master that hopefully also covers async append etc\n> (which I was already planning to do before we knew about this Windows\n> problem). 
More soon.\n\nWFM, but we'll have to remember to revert this in v15 if we don't\nhave a solid fix by then.\n\nIt's kinda late here, so I'll push the reverts tomorrow.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jan 2022 21:50:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "On Tue, Jan 25, 2022 at 3:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > I vote for reverting in release branches only. I'll propose a better\n> > WES patch set for master that hopefully also covers async append etc\n> > (which I was already planning to do before we knew about this Windows\n> > problem). More soon.\n>\n> WFM, but we'll have to remember to revert this in v15 if we don't\n> have a solid fix by then.\n\nPhew, after a couple of days of very slow compile/test cycles on\nWindows exploring a couple of different ideas, I finally have\nsomething new. First let me recap the three main ideas in this\nthread:\n\n1. It sounds like no one really loves the WSAPoll() kludge, even\nthough it apparently works for simple cases. It's not totally clear\nthat it really works in enough cases, for one thing. It doesn't allow\nfor a socket to be in two WESes at the same time, and I'm not sure I\nwant to bank on Winsock's WSAPoll() being guaranteed to report POLLHUP\nwhen half closed (as mentioned, no other OS does AFAIK).\n\n2. The long-lived-WaitEventSets-everywhere concept was initially\nappealling to me and solves the walreceiver problem (when combined\nwith a sticky seen_fd_close flag), and I've managed to get that\nworking correctly across libpq reconnects. 
As mentioned, I also have\nsome toy patches along those lines for the equivalent but more complex\nproblem in postgres_fdw, because I've been studying how to make\nparallel append generate a tidy stream of epoll_wait()/kevent() calls,\ninstead of a quadratic explosion of setup/teardown spam. I'll write\nsome more about those patches and hopefully propose them soon anyway,\nbut on reflection I don't really want that Unix efficiency problem to\nbe tangled up with this Windows correctness problem. That'd require a\nprogramming rule that I don't want to burden us with forever: you'd\n*never* be able to use a socket in more than one WaitEventSet, and\nWaitLatchOrSocket() would have to be removed.\n\n3. The real solution to this problem is to recognise that we just\nhave the event objects in the wrong place. WaitEventSets shouldn't\nown them: they need to be 1:1 with sockets, or Winsock will eat\nevents. Likewise for the flag you need for edge->level conversion, or\n*we'll* eat events. Having now tried that, it's starting to feel like\nthe best way forward, even though my initial prototype (see attached)\nis maybe a tad cumbersome with bookkeeping. I believe it means that\nall existing coding patterns *should* now be safe (not yet confirmed\nby testing), and we're free to put sockets in multiple WESes even at\nthe same time if the need arises.\n\nThe basic question is: how should a socket user find the associated\nevent handle and flags? Some answers:\n\n1. \"pgsocket\" could become a pointer to a heap-allocated wrapper\nobject containing { socket, event, flags } on Windows, or something\nlike that, but that seems a bit invasive and tangled up with public\nAPIs like libpq, which put me off trying that. I'm willing to explore\nit if people object to my other idea.\n\n2. \"pgsocket\" could stay unchanged, but we could have a parallel\narray with extra socket state, indexed by file descriptor. 
We could\nuse new socket()/close() libpq events so that libpq's sockets could be\nregistered in this scheme without libpq itself having to know anything\nabout that. That worked pretty nicely when I developed it on my\nFreeBSD box, but on Windows I soon learned that SOCKET is really yet\nanother name for HANDLE, so it's not a dense number space anchored at\n0 like Unix file descriptors. The array could be prohibitively big.\n\n3. I tried the same as #2 but with a hash table, and ran into another\nsmall problem when putting it all together: we probably don't want to\nlongjump out of libpq callbacks on allocation failure. So, I modified\nsimplehash to add a no-OOM behaviour. That's the POC patch set I'm\nattaching for show-and-tell. Some notes and TODOs in the commit\nmessages and comments.\n\nThoughts?", "msg_date": "Tue, 1 Feb 2022 18:02:34 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "Hi,\n\nOn 2022-02-01 18:02:34 +1300, Thomas Munro wrote:\n> 1. It sounds like no one really loves the WSAPoll() kludge, even\n> though it apparently works for simple cases.\n\nYea, at least I don't :)\n\n\n> 2. The long-lived-WaitEventSets-everywhere concept was initially\n> appealling to me and solves the walreceiver problem (when combined\n> with a sticky seen_fd_close flag), and I've managed to get that\n> working correctly across libpq reconnects. As mentioned, I also have\n> some toy patches along those lines for the equivalent but more complex\n> problem in postgres_fdw, because I've been studying how to make\n> parallel append generate a tidy stream of epoll_wait()/kevent() calls,\n> instead of a quadratic explosion of setup/teardown spam. 
I'll write\n> some more about those patches and hopefully propose them soon anyway,\n> but on reflection I don't really want that Unix efficiency problem to\n> be tangled up with this Windows correctness problem. That'd require a\n> programming rule that I don't want to burden us with forever: you'd\n> *never* be able to use a socket in more than one WaitEventSet, and\n> WaitLatchOrSocket() would have to be removed.\n\nWhich seems like a bad direction to go in.\n\n\n> 3. The real solution to this problem is to recognise that we just\n> have the event objects in the wrong place. WaitEventSets shouldn't\n> own them: they need to be 1:1 with sockets, or Winsock will eat\n> events. Likewise for the flag you need for edge->level conversion, or\n> *we'll* eat events. Having now tried that, it's starting to feel like\n> the best way forward, even though my initial prototype (see attached)\n> is maybe a tad cumbersome with bookkeeping. I believe it means that\n> all existing coding patterns *should* now be safe (not yet confirmed\n> by testing), and we're free to put sockets in multiple WESes even at\n> the same time if the need arises.\n\nAgreed.\n\n\n> The basic question is: how should a socket user find the associated\n> event handle and flags? Some answers:\n> \n> 1. \"pgsocket\" could become a pointer to a heap-allocated wrapper\n> object containing { socket, event, flags } on Windows, or something\n> like that, but that seems a bit invasive and tangled up with public\n> APIs like libpq, which put me off trying that. I'm willing to explore\n> it if people object to my other idea.\n\nI'm not sure if the libpq aspect really is a problem. We're not going to have\nto do that conversion repeatedly, I think.\n\n\n> 2. \"pgsocket\" could stay unchanged, but we could have a parallel\n> array with extra socket state, indexed by file descriptor. 
We could\n> use new socket()/close() libpq events so that libpq's sockets could be\n> registered in this scheme without libpq itself having to know anything\n> about that. That worked pretty nicely when I developed it on my\n> FreeBSD box, but on Windows I soon learned that SOCKET is really yet\n> another name for HANDLE, so it's not a dense number space anchored at\n> 0 like Unix file descriptors. The array could be prohibitively big.\n\nYes, that seems like a no-go. It also doesn't seem like it'd gain much in the\nrobustness department over 1) - you'd not know if a socket had been closed and\na new one with the same integer value had been created.\n\n\n> 3. I tried the same as #2 but with a hash table, and ran into another\n> small problem when putting it all together: we probably don't want to\n> longjump out of libpq callbacks on allocation failure. So, I modified\n> simplehash to add a no-OOM behaviour. That's the POC patch set I'm\n> attaching for show-and-tell. Some notes and TODOs in the commit\n> messages and comments.\n\n1) seems more plausible to me, but I can see this working as well.\n\n\n> From bdd90aeb65d82ecae8fe58b441d25a1e1b129bf3 Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <thomas.munro@gmail.com>\n> Date: Sat, 29 Jan 2022 02:15:10 +1300\n> Subject: [PATCH 1/3] Add low level socket events for libpq.\n> \n> Provide a way to get a callback when a socket is created or closed.\n> \n> XXX TODO handle callback failure\n> XXX TODO investigate overheads/other implications of having a callback\n> installed\n\nWhat do we need this for? 
I still don't understand what kind of reconnects we\nneed to automatically detect.\n\n\n> +#ifdef SH_RAW_ALLOCATOR_NOZERO\n> +\tmemset(tb, 0, sizeof(SH_TYPE));\n> +#endif\n...\n> +#ifdef SH_RAW_ALLOCATOR_NOZERO\n> +\tmemset(newdata, 0, sizeof(SH_ELEMENT_TYPE) * newsize);\n> +#endif\n\nSeems like this should be handled in an allocator wrapper, rather than in\nmultiple places in the simplehash code?\n\n\n> +#if defined(WIN32) || defined(USE_ASSERT_CHECKING)\n> +static socktab_hash *socket_table;\n> +#endif\n\nPerhaps a separate #define for this would be appropriate? So we don't have to\nspell the exact condition out every time.\n\n\n\n> +ExtraSocketState *\n> +SocketTableAdd(pgsocket sock, bool no_oom)\n> +{\n> +#if defined(WIN32) || defined(USE_ASSERT_CHECKING)\n> +\tSocketTableEntry *ste;\n> +\tExtraSocketState *ess;\n\nGiven there's goto targets that test for ess != NULL, it seems nicer to\ninitialize it to NULL. I don't think there's problematic paths right now, but\nit seems unnecessary to \"risk\" that changing over time.\n\n\n> +#if !defined(FRONTEND)\n> +struct ExtraSocketState\n> +{\n> +#ifdef WIN32\n> +\tHANDLE\t\tevent_handle;\t\t/* one event for the life of the socket */\n> +\tint\t\t\tflags;\t\t\t\t/* most recent WSAEventSelect() flags */\n> +\tbool\t\tseen_fd_close;\t\t/* has FD_CLOSE been received? */\n> +#else\n> +\tint\t\t\tdummy;\t\t\t\t/* none of this is needed for Unix */\n> +#endif\n> +};\n\nSeems like we might want to track more readiness events than just close? If we\ne.g. started tracking whether we've seen writes blocking / write readiness,\nwe could get rid of cruft like\n\n\t\t/*\n\t\t * Windows does not guarantee to log an FD_WRITE network event\n\t\t * indicating that more data can be sent unless the previous send()\n\t\t * failed with WSAEWOULDBLOCK. While our caller might well have made\n\t\t * such a call, we cannot assume that here. Therefore, if waiting for\n\t\t * write-ready, force the issue by doing a dummy send().
If the dummy\n\t\t * send() succeeds, assume that the socket is in fact write-ready, and\n\t\t * return immediately. Also, if it fails with something other than\n\t\t * WSAEWOULDBLOCK, return a write-ready indication to let our caller\n\t\t * deal with the error condition.\n\t\t */\n\nthat seems likely to just make bugs less likely, rather than actually fix them...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 1 Feb 2022 09:38:30 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "On Wed, Feb 2, 2022 at 6:38 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-02-01 18:02:34 +1300, Thomas Munro wrote:\n> > 1. \"pgsocket\" could become a pointer to a heap-allocated wrapper\n> > object containing { socket, event, flags } on Windows, or something\n> > like that, but that seems a bit invasive and tangled up with public\n> > APIs like libpq, which put me off trying that. I'm willing to explore\n> > it if people object to my other idea.\n>\n> I'm not sure if the libpq aspect really is a problem. We're not going to have\n> to do that conversion repeatedly, I think.\n\nAlright, I'm prototyping that variant today.\n\n> > Provide a way to get a callback when a socket is created or closed.\n> >\n> > XXX TODO handle callback failure\n> > XXX TODO investigate overheads/other implications of having a callback\n> > installed\n>\n> What do we need this for? I still don't understand what kind of reconnects we\n> need to automatically need to detect.\n\nlibpq makes new sockets in various cases like when trying multiple\nhosts/ports (the easiest test to set up) or in some SSL and GSSAPI\ncases. 
In the model shown in the most recent patch where there is a\nhash table holding ExtraSocketState objects that libpq doesn't even\nknow about, we have to do SocketTableDrop(old socket),\nSocketTableAdd(new socket) at those times, which is why I introduced\nthat callback.\n\nIf we switch to the model where a socket is really a pointer to a\nwrapper struct (which I'm about to prototype), the need for all that\nbookkeeping goes away, no callbacks, no hash table, but now libpq has\nto participate knowingly in a socket wrapping scheme to help the\nbackend while also somehow providing unwrapped SOCKET for client API\nstability. Trying some ideas, more on that soon.\n\n> > +#if !defined(FRONTEND)\n> > +struct ExtraSocketState\n> > +{\n> > +#ifdef WIN32\n> > + HANDLE event_handle; /* one event for the life of the socket */\n> > + int flags; /* most recent WSAEventSelect() flags */\n> > + bool seen_fd_close; /* has FD_CLOSE been received? */\n> > +#else\n> > + int dummy; /* none of this is needed for Unix */\n> > +#endif\n> > +};\n>\n> Seems like we might want to track more readiness events than just close? If we\n> e.g. started tracking whether we've seen writes blocking / write readiness,\n> we could get rid of cruft like\n>\n> /*\n> * Windows does not guarantee to log an FD_WRITE network event\n> * indicating that more data can be sent unless the previous send()\n> * failed with WSAEWOULDBLOCK. While our caller might well have made\n> * such a call, we cannot assume that here. Therefore, if waiting for\n> * write-ready, force the issue by doing a dummy send(). If the dummy\n> * send() succeeds, assume that the socket is in fact write-ready, and\n> * return immediately. Also, if it fails with something other than\n> * WSAEWOULDBLOCK, return a write-ready indication to let our caller\n> * deal with the error condition.\n> */\n>\n> that seems likely to just make bugs less likely, rather than actually fix them...\n\nYeah. 
Unlike FD_CLOSE, FD_WRITE is a non-terminal condition so would\nalso need to be cleared, which requires seeing all\nsend()/sendto()/write() calls with wrapper functions, but we already\ndo stuff like that. Looking into it...\n\n\n", "msg_date": "Wed, 2 Feb 2022 09:15:41 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "Hi,\n\nOver in another thread I made some wild unsubstantiated guesses that the\nwindows issues could have been made much more likely by a somewhat odd bit of\ncode in PQisBusy():\n\nhttps://postgr.es/m/1959196.1644544971%40sss.pgh.pa.us\n\nAlexander, any chance you'd try if that changes the likelihood of the problem\noccurring, without any other fixes / reverts applied?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 10 Feb 2022 18:22:47 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" 
}, { "msg_contents": "Hello Andres,\n11.02.2022 05:22, Andres Freund wrote:\n> Over in another thread I made some wild unsubstantiated guesses that the\n> windows issues could have been made much more likely by a somewhat odd bit of\n> code in PQisBusy():\n>\n> https://postgr.es/m/1959196.1644544971%40sss.pgh.pa.us\n>\n> Alexander, any chance you'd try if that changes the likelihood of the problem\n> occurring, without any other fixes / reverts applied?\nUnfortunately I haven't seen an improvement for the test in question.\nWith the PQisBusy-fix.patch from [1] and without any other changes on\nthe master branch (52377bb8) it still fails (on iterations 13, 5, 2, 2\nfor me).\nThe diagnostic logging (in attachment) added:\n2022-02-12 01:04:16.341 PST [4912] LOG:  libpqrcv_receive: PQgetCopyData\nreturned 0\n2022-02-12 01:04:16.341 PST [4912] LOG:  libpqrcv_receive: PQgetCopyData\n2 returned -1\n2022-02-12 01:04:16.341 PST [4912] LOG:  libpqrcv_receive:\nend-of-streaming or error: -1\n2022-02-12 01:04:16.341 PST [4912] LOG:  libpqrcv_PQgetResult:\nstreamConn->asyncStatus: 1 && streamConn->status: 0\n2022-02-12 01:04:16.341 PST [4912] LOG:  libpqrcv_receive\nlibpqrcv_PQgetResult returned 10551584, 1\n2022-02-12 01:04:16.341 PST [4912] LOG:  libpqrcv_receive\nlibpqrcv_PQgetResult PGRES_COMMAND_OK\n2022-02-12 01:04:16.341 PST [4912] LOG:  libpqrcv_PQgetResult:\nstreamConn->asyncStatus: 1 && streamConn->status: 0\n2022-02-12 01:04:16.341 PST [4912] LOG:  libpqrcv_PQgetResult loop\nbefore WaitLatchOrSocket\n2022-02-12 01:04:16.341 PST [4912] LOG:  WSAEventSelect event->fd: 948,\nflags: 21\n2022-02-12 01:04:16.341 PST [4912] LOG:  WaitLatchOrSocket before\nWaitEventSetWait\n2022-02-12 01:04:16.341 PST [4912] LOG:  WaitEventSetWait before\nWaitEventSetWaitBlock\n2022-02-12 01:04:16.341 PST [4912] LOG:  WaitEventSetWaitBlock before\nWaitForMultipleObjects: 3\n...\nshows that before the doomed WaitForMultipleObjects() call the field\nconn->status is 0 (CONNECTION_OK).\n\n[1] 
https://www.postgresql.org/message-id/2187263.1644616494%40sss.pgh.pa.us\n\nBest regards,\nAlexander", "msg_date": "Sat, 12 Feb 2022 13:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> 11.02.2022 05:22, Andres Freund wrote:\n>> Over in another thread I made some wild unsubstantiated guesses that the\n>> windows issues could have been made much more likely by a somewhat odd bit of\n>> code in PQisBusy():\n>> https://postgr.es/m/1959196.1644544971%40sss.pgh.pa.us\n>> Alexander, any chance you'd try if that changes the likelihood of the problem\n>> occurring, without any other fixes / reverts applied?\n\n> Unfortunately I haven't seen an improvement for the test in question.\n\nYeah, that's what I expected, sadly. While I think this PQisBusy behavior\nis definitely a bug, it will not lead to an infinite loop, just to write\nfailures being reported in a less convenient fashion than intended.\n\nI wonder whether it would help to put a PQconsumeInput call *before*\nthe PQisBusy loop, so that any pre-existing EOF condition will be\ndetected. If you don't like duplicating code, we could restructure\nthe loop as\n\n for (;;)\n {\n int rc;\n\n /* Consume whatever data is available from the socket */\n if (PQconsumeInput(streamConn) == 0)\n {\n /* trouble; return NULL */\n return NULL;\n }\n\n /* Done? */\n if (!PQisBusy(streamConn))\n break;\n\n /* Wait for more data */\n rc = WaitLatchOrSocket(MyLatch,\n WL_EXIT_ON_PM_DEATH | WL_SOCKET_READABLE |\n WL_LATCH_SET,\n PQsocket(streamConn),\n 0,\n WAIT_EVENT_LIBPQWALRECEIVER_RECEIVE);\n\n /* Interrupted? 
*/\n if (rc & WL_LATCH_SET)\n {\n ResetLatch(MyLatch);\n ProcessWalRcvInterrupts();\n }\n }\n\n /* Now we can collect and return the next PGresult */\n return PQgetResult(streamConn);\n\n\nIn combination with the PQisBusy fix, this might actually help ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 12 Feb 2022 11:47:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "Hi,\n\nOn 2022-02-12 11:47:20 -0500, Tom Lane wrote:\n> Alexander Lakhin <exclusion@gmail.com> writes:\n> > 11.02.2022 05:22, Andres Freund wrote:\n> >> Over in another thread I made some wild unsubstantiated guesses that the\n> >> windows issues could have been made much more likely by a somewhat odd bit of\n> >> code in PQisBusy():\n> >> https://postgr.es/m/1959196.1644544971%40sss.pgh.pa.us\n> >> Alexander, any chance you'd try if that changes the likelihood of the problem\n> >> occurring, without any other fixes / reverts applied?\n> \n> > Unfortunately I haven't seen an improvement for the test in question.\n\nThanks for testing!\n\n\n> Yeah, that's what I expected, sadly. While I think this PQisBusy behavior\n> is definitely a bug, it will not lead to an infinite loop, just to write\n> failures being reported in a less convenient fashion than intended.\n\nFWIW, I didn't think it'd end up looping indefinitely, but that there's a\nchance it could end up waiting indefinitely. The WaitLatchOrSocket() doesn't\nhave a timeout, and if I understand the windows FD_CLOSE stuff correctly,\nyou're not guaranteed to get an event if you do WaitForMultipleObjects if\nFD_CLOSE was already consumed and if there isn't any data to read.\n\n\nISTM that it's not a great idea for libpqrcv_receive() to do blocking IO at\nall. 
The caller expects it to not block...\n\n\n> I wonder whether it would help to put a PQconsumeInput call *before*\n> the PQisBusy loop, so that any pre-existing EOF condition will be\n> detected. If you don't like duplicating code, we could restructure\n> the loop as\n\nThat does look a bit saner. Even leaving EOF and windows issues aside, it\nseems weird to do a WaitLatchOrSocket() without having tried to read more\ndata.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 12 Feb 2022 17:53:10 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "On Mon, Jan 10, 2022 at 04:25:27PM -0500, Tom Lane wrote:\n> Apropos of that, it's worth noting that wait_for_catchup *is*\n> dependent on up-to-date stats, and here's a recent run where\n> it sure looks like the timeout cause is AWOL stats collector:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2022-01-10%2004%3A51%3A34\n> \n> I wonder if we should refactor wait_for_catchup to probe the\n> standby directly instead of relying on the upstream's view.\n\nIt would be nice. For logical replication tests, do we have a monitoring API\nindependent of the stats collector? If not and we don't want to add one, a\nhacky alternative might be for wait_for_catchup to run a WAL-writing command\nevery ~20s. That way, if the stats collector misses the datagram about the\nstandby reaching a certain LSN, the stats collector would have more chances.\n\n\n", "msg_date": "Sat, 19 Mar 2022 01:47:04 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" 
}, { "msg_contents": "I have a new socket abstraction patch that should address the known\nWindows socket/event bugs, but it's a little bigger than I thought it\nwould be, not quite ready, and now too late to expect people to review\nfor 15, so I think it should go into the next cycle. I've bounced\nhttps://commitfest.postgresql.org/37/3523/ into the next CF. We'll\nneed to do something like 75674c7e for master.\n\n\n", "msg_date": "Tue, 22 Mar 2022 15:16:06 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I have a new socket abstraction patch that should address the known\n> Windows socket/event bugs, but it's a little bigger than I thought it\n> would be, not quite ready, and now too late to expect people to review\n> for 15, so I think it should go into the next cycle. I've bounced\n> https://commitfest.postgresql.org/37/3523/ into the next CF. We'll\n> need to do something like 75674c7e for master.\n\nOK. You want me to push 75674c7e to HEAD?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Mar 2022 23:13:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "On Tue, Mar 22, 2022 at 4:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > I have a new socket abstraction patch that should address the known\n> > Windows socket/event bugs, but it's a little bigger than I thought it\n> > would be, not quite ready, and now too late to expect people to review\n> > for 15, so I think it should go into the next cycle. I've bounced\n> > https://commitfest.postgresql.org/37/3523/ into the next CF. We'll\n> > need to do something like 75674c7e for master.\n>\n> OK. 
You want me to push 75674c7e to HEAD?\n\nThanks, yes, please do.\n\n\n", "msg_date": "Tue, 22 Mar 2022 16:24:23 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Tue, Mar 22, 2022 at 4:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> OK. You want me to push 75674c7e to HEAD?\n\n> Thanks, yes, please do.\n\nDone.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 22 Mar 2022 10:19:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "Here is a new attempt to fix this mess. Disclaimer: this based\nentirely on reading the manual and vicariously hacking a computer I\ndon't have via CI.\n\nThe two basic ideas are:\n\n * keep per-socket event handles in a hash table\n * add our own level-triggered event memory\n\nThe socket table entries are reference counted, and exist as long as\nthe socket is currently in at least one WaitEventSet. When creating a\nnew entry, extra polling logic re-checks the initial level-triggered\nstate (an overhead that we had in an ad-hoc way already, and that can\nbe avoided by more widespread use of long lived WaitEventSet). You\nare not allowed to close a socket while it's in a WaitEventSet,\nbecause then a new socket could be allocated with the same number and\nchaos would ensue. For example, if we revive the idea of hooking\nlibpq connections up to long-lived WaitEventSets, we'll probably need\nto invent a libpq event callback that says 'I am going to close socket\nX!', so you have a chance to remove the socket from any WaitEventSet\n*before* it's closed, to maintain that invariant. 
Other lazier ideas\nare possible, but probably become impossible in a hypothetical\nmulti-threaded future.\n\nWith these changes, AFAIK it should be safe to reinstate graceful\nsocket shutdowns, to fix the field complaints about FATAL error\nmessages being eaten by a grue and the annoying random CI/BF failures.\n\nHere are some other ideas that I considered but rejected for now:\n\n1. We could throw the WAIT_USE_WIN32 code away, and hack\nWAIT_USE_POLL to use WSAPoll() on Windows; we could create a\n'self-pipe' using a pair of connected AF_UNIX sockets to implement\nlatches and fake signals. It seems like a lot of work, and makes\nlatches a bit worse (instead of \"everything is an event!\" we have\n\"everything is a socket!\" with a helper thread, and we don't even have\nsocketpair() on this OS). Blah.\n\n2. We could figure out how to do fancy asynchronous sockets and IOCP.\nThat's how NT really wants to talk to the world, it doesn't really\nwant to pretend to be Unix. I expect that is where we'll get to\neventually but it's a much bigger cross-platform R&D job.\n\n3. Maybe there is a kind of partial step towards idea 2 that Andres\nmentioned on another thread somewhere: one could use an IOCP, and then\nuse event callbacks that run on system threads to post IOCP messages\n(a bit like we do for our fake waitpid()).\n\nWhat I have here is the simplest way I could see to patch up what we\nalready have, with the idea that in the fullness of time we'll\neventually get around to idea 2, once someone is ready to do the\npress-ups.\n\nReview/poking-with-a-stick/trying-to-break-it most welcome.", "msg_date": "Fri, 10 Nov 2023 16:31:27 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "Hello Thomas,\n\n10.11.2023 06:31, Thomas Munro wrote:\n> Here is a new attempt to fix this mess. 
Disclaimer: this based\n> entirely on reading the manual and vicariously hacking a computer I\n> don't have via CI.\n\nAs it also might (and I would like it to) be the final attempt, I decided\nto gather information and all the cases that we had on this topic.\nAt least, for the last 5 years we've seen:\n\n[1] 2019-01-18: Re: BUG #15598: PostgreSQL Error Code is not reported when connection terminated due to \nidle-in-transaction timeout\n     test 099_case_15598 made (in attachment)\nno commit\n\n[2] 2019-01-22: Rare SSL failures on eelpout\n     references [1]\n     test 099_rare_ssl_failures made (in attachment)\ncommit 2019-03-19 1f39a1c06: Restructure libpq's hqandling of send failures.\n\n[3] 2019-11-28: pgsql: Add tests for '-f' option in dropdb utility.\n     test 051_dropdb_force was proposed (included in the attachment)\ncommit 2019-11-30 98a9b37ba: Revert commits 290acac92b and 8a7e9e9dad.\n\n[4] 2019-12-06: closesocket behavior in different platforms\n     references [1], [2], [3]; a documentation change proposed\nno commit\n\n[5] 2020-06-03: libpq copy error handling busted\n     test 099_pgbench_with_server_off made (in attachment)\ncommit 2020-06-07 7247e243a: Try to read data from the socket in pqSendSome's write_failed paths. (a fix for 1f39a1c06)\n\n[6] 2020-10-19: BUG #16678: The ecpg connect/test5 test sometimes fails on Windows\nno commit\n\n[7] 2021-11-17: Windows: Wrong error message at connection termination\n    references [6]\ncommit: 2021-12-02 6051857fc: On Windows, close the client socket explicitly during backend shutdown.\n\n[8] 2021-12-05 17:03:18: MSVC SSL test failure\ncommit: 2021-12-07 ed52c3707: On Windows, also call shutdown() while closing the client socket.\n\n[9] 2021-12-30: Why is src/test/modules/committs/t/002_standby.pl flaky?\n    additional test 099_postgres_fdw_disconnect made (in attachment)\ncommit 2022-01-26 75674c7ec: Revert \"graceful shutdown\" changes for Windows, in back branches only. 
(REL_14_STABLE)\ncommit 2022-03-22 29992a6a5: Revert \"graceful shutdown\" changes for Windows. (master)\n\n[10] 2022-02-02 19:19:22: BUG #17391: While using --with-ssl=openssl and PG_TEST_EXTRA='ssl' options, SSL tests fail on \nOpenBSD 7.0\ncommit 2022-02-12 335fa5a26: Fix thinko in PQisBusy(). (a fix for 1f39a1c06)\ncommit 2022-02-12 faa189c93: Move libpq's write_failed mechanism down to pqsecure_raw_write(). (a fix for 1f39a1c06)\n\nAs it becomes difficult to test/check all those cases scattered around, I\ndecided to retell the whole story by means of tests. Please look at the\nscript win-sock-tests-01.cmd attached, which can be executed both on\nWindows (as regular cmd) and on Linux (bash win-sock-tests-01.cmd).\n\nAt the end of the script we can see several things.\nFirst, the last patchset you posted, applied to b282fa88d~1, fixes the\nissue discussed in this thread (it eliminates failures of\ncommit_ts_002_standby (and also 099_postgres_fdw_disconnect)).\n\nSecond, with the sleep added (see [6]), I had the same results of\n`meson test` on Windows and on Linux.\nNamely, there are some tests failing (see win-sock-tests-01.cmd) due to\nwalsender preventing server stop.\nI describe this issue separately (see details in walsender-cannot-exit.txt;\nmaybe it's worth to discuss it in a separate thread) as it's kind of\noff-topic. With the supplementary sleep() added to WalSndLoop(), the\ncomplete `meson test` passes successfully both on Windows and on Linux.\n\nThird, cases [1] and [3] are still broken, due to a Windows peculiarity.\nPlease see server.c and client.c attached, which demonstrate:\nCase \"no shutdown/closesocket\" on Windows:\nC:\\src>server\nListening for incoming connections...\n                         C:\\src>client\nClient connected: 127.0.0.1:64395\n                         Connection to server established. 
Enter message: msg\nClient message: msg\nSending message...\n                         Sleeping...\nExiting...\nC:\\src>\n                         Calling recv()...\n                         recv() failed\n\nCase \"no shutdown/closesocket\" on Linux:\n$ server\nListening for incoming connections...\n                         $ client\nClient connected: 127.0.0.1:33044\n                         Connection to server established. Enter message: msg\nClient message: msg\nSending message...\n                         Sleeping...\nExiting...\n$\n                         Calling recv()...\n                         Server message: MESSAGE\n\nCase \"shutdown/closesocket\" on Windows:\nC:\\src>server shutdown closesocket\nListening for incoming connections...\n                         C:\\src>client\nClient connected: 127.0.0.1:64396\n                         Connection to server established. Enter message: msg\nClient message: msg\nSending message...\n                         Sleeping...\nCalling shutdown()...\nCalling closesocket()...\nExiting...\nC:\\src>\n                         Calling recv()...\n                         Server message: MESSAGE\n\nThat's okay so far, but what makes cases [1]/[3] different from all cases\nin the whole existing test suite, which now performed successfully, is\nthat psql calls send() before recv() on a socket closed and abandoned by\nthe server.\nThose programs show on Windows:\nC:\\src>server shutdown closesocket\nListening for incoming connections...\n                         C:\\src>client send_before_recv\nClient connected: 127.0.0.1:64397\n                         Connection to server established. 
Enter message: msg\nClient message: msg\nSending message...\n                         Sleeping...\nCalling shutdown()...\nCalling closesocket()...\nExiting...\nC:\\src>\n                         send() returned 4\n                         Calling recv()...\n                         recv() failed\n\nAs known, on Linux the same scenario works just fine.\n\nFourth, tests 099_rare_ssl_failures (and 001_ssl_tests, though more rarely)\nfail for me with the latest patches (only on Windows again):\n...\n  8/10 postgresql:ssl_1 / ssl_1/099_rare_ssl_failures ERROR           141.34s   exit status 3\n...\n  9/10 postgresql:ssl_7 / ssl_7/099_rare_ssl_failures OK              142.52s   2000 subtests passed\n10/10 postgresql:ssl_6 / ssl_6/099_rare_ssl_failures OK              143.00s   2000 subtests passed\n\nOk:                 2\nExpected Fail:      0\nFail:               8\n\nssl_1\\099_rare_ssl_failures\\log\\regress_log_099_rare_ssl_failures.txt:\n...\niteration 354\n[20:57:06.984](0.106s) ok 707 - certificate authorization fails with revoked client cert with server-side CRL directory\n[20:57:06.984](0.000s) ok 708 - certificate authorization fails with revoked client cert with server-side CRL directory: \nmatches\niteration 355\n[20:57:07.156](0.172s) ok 709 - certificate authorization fails with revoked client cert with server-side CRL directory\n[20:57:07.156](0.001s) not ok 710 - certificate authorization fails with revoked client cert with server-side CRL \ndirectory: matches\n[20:57:07.159](0.003s) #   Failed test 'certificate authorization fails with revoked client cert with server-side CRL \ndirectory: matches'\n#   at .../src/test/ssl_1/t/099_rare_ssl_failures.pl line 88.\n[20:57:07.159](0.000s) #                   'psql: error: connection to server at \"127.0.0.1\", port 59843 failed: could \nnot receive data from server: Software caused connection abort (0x00002745/10053)\n# SSL SYSCALL error: Software caused connection abort (0x00002745/10053)'\n#     doesn't 
match '(?^:SSL error: sslv3 alert certificate revoked)'\n...\n\nIt seems to me that it can have the same explanation (if openssl can call\nsend() before recv() under the hood), but maybe it should be investigated\nfurther.\n\n> Review/poking-with-a-stick/trying-to-break-it most welcome.\n\nI could not find anything suspicious in the code, except for maybe a typo\n\"The are ...\".\n\n[1] https://www.postgresql.org/message-id/flat/87k1iy44fd.fsf%40news-spur.riddles.org.uk#ba0c07f13c300d42fd537855dd95dd2b\n[2] https://www.postgresql.org/message-id/flat/CAEepm%3D2n6Nv%2B5tFfe8YnkUm1fXgvxR0Mm1FoD%2BQKG-vLNGLyKg%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/flat/E1iaD8h-0004us-K9@gemulon.postgresql.org\n[4] \nhttps://www.postgresql.org/message-id/flat/CALDaNm2tEvr_Kum7SyvFn0%3D6H3P0P-Zkhnd%3DdkkX%2BQ%3DwKutZ%3DA%40mail.gmail.com\n[5] https://www.postgresql.org/message-id/flat/20200603201242.ofvm4jztpqytwfye%40alap3.anarazel.de\n[6] https://www.postgresql.org/message-id/16678-253e48d34dc0c376@postgresql.org\n[7] https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de\n[8] https://www.postgresql.org/message-id/flat/af5e0bf3-6a61-bb97-6cba-061ddf22ff6b%40dunslane.net\n[9] \nhttps://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com\n[10] https://www.postgresql.org/message-id/flat/17391-304f81bcf724b58b%40postgresql.org\n\nBest regards,\nAlexander", "msg_date": "Tue, 21 Nov 2023 14:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "On Thu, Nov 9, 2023 at 10:32 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Here is a new attempt to fix this mess. 
Disclaimer: this based\n> entirely on reading the manual and vicariously hacking a computer I\n> don't have via CI.\n\nI'd first like to congratulate this thread on reaching its second\nbirthday. The CommitFest entry hasn't quite made it to the two year\nmark yet - expect that in another month or so - but thread itself is\nover the line.\n\nRegarding 0001, I don't know if we really need SH_RAW_FREE. You can\njust define your own SH_FREE implementation in userspace. That doesn't\nwork for SH_RAW_ALLOCATOR because there's code in simplehash.h that\nknows about memory contexts apart from the actual definition of\nSH_ALLOCATE - e.g. we include a MemoryContext pointer in SH_TYPE, and\nin the signature of SH_CREATE. But SH_FREE doesn't seem to have any\nsimilar issues. Maybe it's still worth doing for convenience -- I\nhaven't thought about that very hard -- but it doesn't seem to be\nrequired in the same way that SH_RAW_ALLOCATOR was.\n\nI wonder whether we really want 0002. It seems like a pretty\nsignificant behavior change -- now everybody using simplehash has to\nworry about whether failure cases are possible. And maybe there's some\nperformance overhead. And most of the changes are restricted to the\nSH_RAW_ALLOCATOR case, but the changes to SH_GROW are not. And making\nthis contingent on SH_RAW_ALLOCATOR doesn't seem principled.\n\nI kind of wonder whether trying to handle OOM here is the wrong\ndirection to go. What if we just bail out hard if we can't insert into\nthe hash table? I think that we don't expect the hash table to ever be\nvery large (right?) and we don't install these kinds of defenses\neverywhere that OOM on a small memory allocation is a possibility (or\nat least I don't think we do). I'm actually sort of unclear about why\nyou're trying to force this to use raw malloc/free instead of\npalloc/pfree. Do we need to use this on the frontend side? 
Do we need\nit on the backend side prior to the memory context infrastructure\nbeing up?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Jan 2024 12:04:32 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nlike there were CFbot test failures last time it was run [1]. Please\nhave a look and post an updated version if necessary.\n\n======\n1[] https://commitfest.postgresql.org/46/3523/\n[1] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/3523\n\n\n", "msg_date": "Mon, 22 Jan 2024 15:26:09 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" }, { "msg_contents": "On Mon, 22 Jan 2024 at 09:56, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> 2024-01 Commitfest.\n>\n> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> like there were CFbot test failures last time it was run [1]. Please\n> have a look and post an updated version if necessary.\n\nThe patch which you submitted has been awaiting your attention for\nquite some time now. As such, we have moved it to \"Returned with\nFeedback\" and removed it from the reviewing queue. Depending on\ntiming, this may be reversible. Kindly address the feedback you have\nreceived, and resubmit the patch to the next CommitFest.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 1 Feb 2024 15:04:19 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why is src/test/modules/committs/t/002_standby.pl flaky?" } ]
[ { "msg_contents": "Attached are a couple of patches for loose ends that I didn't\nget to when I was working on pg_dump before the last CF.\n\n0001 removes all the \"username_subquery\" subqueries in favor\nof doing local username lookups. On the regression database\nwith no extra roles, it seems to be more or less a wash ...\nbut if I create 100 roles, then the patch seems to save five\nor ten percent compared to HEAD.\n\nI also got rid of the rather-pointless-IMO checks for pg_authid\njoin failures, in favor of having the lookup subroutine just\nfatal() if it doesn't find a match. I don't think we need to\nburden translators with all those strings for cases that shouldn't\nhappen. Note that a lot of object types weren't checking\nfor this condition anyway, making it even more pointless.\n\n0002 is a very small patch that gets rid of an extra subquery\nfor identity-sequence checking, realizing that the LEFT JOIN\nin the FROM clause will have picked up that row already,\nif it exists. This again saves a few percent for\n\"pg_dump -s regression\", though the effects would depend a lot\non how many sequences you have.\n\nThese don't seem complicated enough to require real review,\nso I plan to just push them, unless there are objections.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 30 Dec 2021 17:28:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "More pg_dump performance hacking" } ]
[ { "msg_contents": "Hi!\n\nI want to introduce the updated version of the libpq protocol compression patch, initially\nintroduced by Konstantin Knizhnik in this thread:\nhttps://www.postgresql.org/message-id/aad16e41-b3f9-e89d-fa57-fb4c694bec25@postgrespro.ru\n\nThe original thread became huge and it makes it hard for new people to catch up so I think it is better\nto open a new thread with the summary of the current state of the patch. \n\nCompression of libpq traffic is useful in:\n1. COPY\n2. Replication\n3. Queries returning large results sets (for example JSON) through slow connections.\n\nThe patch introduces three new protocol messages: CompressionAck, CompressedData, and\nSetCompressionMethod.\n\nHere is a brief overview of the compression initialization process:\n\n1. Compression can be requested by a client by including the \"compression\" option in its connection\nstring. This can either be a boolean value to enable or\ndisable compression or an explicit list of comma-separated compression algorithms which can\noptionally include compression level. The client indicates the compression request by sending the\n_pq_.compression startup packet\nparameter with a list of compression algorithms and an optional specification of compression level. \nIf the server does not support compression, the backend will ignore the _pq_.compression parameter\nand will not send the CompressionAck message to the frontend. \n\n2. Server receives the client's compression request and intersects the requested compression\nalgorithms with the allowed ones (controlled via the libpq_compression server config setting). If\nthe intersection is not empty, the server responds with CompressionAck containing the final list of\nthe compression algorithms that can be used for the compression of libpq messages between the client\nand server. 
If the intersection is empty (server does not accept any of the requested algorithms),\nthen it replies with CompressionAck containing the empty list and it is up to the client whether to\ncontinue without compression or to report an error. \n\n3. After sending the CompressionAck message, the server can send the SetCompressionMethod message to\nset the current compression algorithm for server-to-client traffic compression. Same for the client,\nafter receiving the CompressionAck message, the client can send the SetCompressionMethod message to set the current\ncompression algorithm for client-to-server traffic compression. Client-to-server and\nserver-to-client compression are independent of each other.\n\nTo compress messages, streaming compression is used. Compressed bytes are wrapped into the\nCompressedData protocol messages. One CompressedData message may contain multiple regular protocol\nmessages. CompressedData messages can be mixed with the regular uncompressed messages.\n\nCompression context is retained between the multiple CompressedData messages. Currently, only\nCopyData, DataRow, and Query types of messages with length more than 60 bytes are being compressed.\n\nIf the client (or server) wants to switch the current compression method, it sends the\nSetCompressionMethod message so that the receiving side would be able to change its decompressor.\n\nI've separated the patch into two parts: first contains the main patch with ZLIB and LZ4 compressing\nalgorithms support, second adds the ZSTD support.\n\nThanks,\n\nDaniil Zakhlystov", "msg_date": "Fri, 31 Dec 2021 14:26:57 +0500", "msg_from": "Daniil Zakhlystov <usernamedt@yandex-team.ru>", "msg_from_op": true, "msg_subject": "libpq compression (part 2)" }, { "msg_contents": "Thanks for working on this. The patch looks to be in good shape - I hope more\npeople will help to review and test it. I took the liberty of creating a new\nCF entry. 
http://cfbot.cputube.org/daniil-zakhlystov.html\n\n+zpq_should_compress(ZpqStream * zpq, char msg_type, uint32 msg_len)\n+{\n+ return zpq_choose_compressor(zpq, msg_type, msg_len) == -1;\n\nI think this is backwards , and should say != -1 ?\n\nAs written, the server GUC libpq_compression defaults to \"on\", and the client\ndoesn't request compression. I think the server GUC should default to off.\nI failed to convince Kontantin about this last year. The reason is that 1)\nit's a new feature; 2) with security implications. An admin should need to\n\"opt in\" to this. I still wonder if this should be controlled by a new \"TYPE\"\nin pg_hba (rather than a GUC); that would make it exclusive of SSL.\n\nHowever, I also think that in the development patches both client and server\nshould enable compression by default, to allow it to be exercized by cfbot.\nFor the other compression patches I worked on, I had an 0001 commit where the\nfeature default was off, plus following patches which enabled the feature by\ndefault - only for cfbot and not intended for commit.\n\n+/* GUC variable containing the allowed compression algorithms list (separated by comma) */\n+char *libpq_compress_algorithms;\n\n=> The global variable is conventionally initialized to the same default as\nguc.c (even though the GUC default value will be applied at startup).\n\n+ * - NegotiateProtocolVersion in cases when server does not support protocol compression\n+ * Anything else probably means it's not Postgres on the other end at all.\n */\n- if (!(beresp == 'R' || beresp == 'E'))\n+ if (!(beresp == 'R' || beresp == 'E' || beresp == 'z' || beresp == 'v'))\n\n=> I think NegotiateProtocolVersion and 'v' are from an old version of the\npatch and no longer used ?\n\nThere's a handful of compiler warnings:\n| z_stream.c:311:5: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement]\n\n+ ereport(LOG, (errmsg(\"failed to parse configured compression setting: %s\", 
libpq_compress_algorithms)));\n+ return 0;\n\n=> Since March, errmsg doesn't need extra parenthesis around it (e3a87b4).\n\n+ * Report current transaction start timestamp as the specified value.\n+ * Zero means there is no active transaction.\n\n=> The comment doesn't correspond to the function (copy+paste).\n\n+ return id >= 0 && id < (sizeof(zs_algorithms) / sizeof(*zs_algorithms));\n+ size_t n_algorithms = sizeof(zs_algorithms) / sizeof(*zs_algorithms);\n\n=> You can use lengthof() for these.\n\nMaybe pg_stat_network_traffic should show the compression?\nIt'd have to show the current client-server and server-client values.\nOr maybe that's not useful since it can change dynamically (unless it were\nreset when the compression method was changed).\n\nI think the psql conninfo Compressor/Decompressor line may be confusing.\nMaybe it should talk about Client->Server and Server->Client.\nMaybe it should be displayed on a single line. \n\nActually, right now the \\conninfo part is hardly useful, since the compressor\nis allocated lazily/\"on demand\", so it shows the compression of some previous\ncommand, but maybe not the last command, and maybe not the next command...\n\nI'm not sure it's needed to allow the compression to be changed dynamically,\nand the generality to support a different compression method for each libpq\nmessage type seems excessive. Maybe it's enough to check that the message type\nis one of VALID_LONG_MESSAGE_TYPE and its length is long enough.\n\nI wonder whether the asymmetric compression idea is useful. The only\napplication I can see for this is that a server might allow to *decompress*\ndata from a client, but may not want to *compress* data to a client.\n\nWhat happens if the compression doesn't decrease the message size? I don't\nsee anything that allows sending the original, raw data. 
The advantage would\nbe that the remote side doesn't incur the overhead of decompression.\n\nI hit this assertion with PGCOMPRESSION=lz4 (but not zlib), and haven't yet\nfound the cause.\n\nTRAP: FailedAssertion(\"decBytes > 0\", File: \"z_stream.c\", Line: 315, PID: 21180)\npostgres: pryzbyj postgres 127.0.0.1(46154) idle(ExceptionalCondition+0x99)[0x5642137806d9]\npostgres: pryzbyj postgres 127.0.0.1(46154) idle(+0x632bfe)[0x5642137d4bfe]\npostgres: pryzbyj postgres 127.0.0.1(46154) idle(zs_read+0x2b)[0x5642137d4ebb]\npostgres: pryzbyj postgres 127.0.0.1(46154) idle(zpq_read+0x192)[0x5642137d1172]\npostgres: pryzbyj postgres 127.0.0.1(46154) idle(+0x378b8a)[0x56421351ab8a]\npostgres: pryzbyj postgres 127.0.0.1(46154) idle(pq_getbyte+0x1f)[0x56421351c12f]\npostgres: pryzbyj postgres 127.0.0.1(46154) idle(PostgresMain+0xf96)[0x56421365fcc6]\npostgres: pryzbyj postgres 127.0.0.1(46154) idle(+0x428b69)[0x5642135cab69]\npostgres: pryzbyj postgres 127.0.0.1(46154) idle(PostmasterMain+0xd1c)[0x5642135cbbac]\npostgres: pryzbyj postgres 127.0.0.1(46154) idle(main+0x220)[0x5642132f9a40]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7)[0x7fb70ef45b97]\n\nMaybe you should reset the streams between each compression message (even if\nit's using the same compression algorithm). This might allow better\ncompression. You could either unconditionally call zs_compressor_free()/\nzs_create_compressor(). Or, add a new, optional interface for resetting the\nstream (and if that isn't set for a given compression method, then free+create\nthe stream). 
For LZ4, that'd be LZ4_resetStream_fast(), but only for\nv1.9.0+...\n\nSome minor formatting issues:\n - There are spaces rather than tabs in a few files; you can maybe fix it by\n piping the file through unexpand -t4.\n - pointers in function declarations should have no space after the stars:\n +zs_buffered(ZStream * zs)\n - There's a few extraneous whitespace changes:\n ConfigureNamesEnum, pq_buffer_has_data, libpq-int.h.\n - Some lines are too long:\n git log -2 -U1 '*.[ch]' |grep -E '.{80}|^diff' |less\n - Some 1-line \"if\" statements would be easier to read without braces {}.\n - Some multi-line comments should have \"*/\" on a separate line:\n git log -3 -p |grep -E '^\\+ \\*.*\\*\\/'\n git log -3 -p '*.c' |grep -E '^\\+[^/]*\\*.*\\*\\/'\n\n-- \nJustin", "msg_date": "Sat, 1 Jan 2022 11:25:05 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "\n> Maybe you should reset the streams between each compression message (even if\n> it's using the same compression algorithm). This might allow better\n> compression.\n\nAFAIK on the contrary - longer data sequence usually compresses better. The codec can use knowledge about prior data to better compress current bytes.\n\nBest regards, Andrey Borodin.\n\n\n", "msg_date": "Mon, 03 Jan 2022 16:16:55 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "On Sat, Jan 1, 2022 at 11:25:05AM -0600, Justin Pryzby wrote:\n> Thanks for working on this. The patch looks to be in good shape - I hope more\n> people will help to review and test it. I took the liberty of creating a new\n> CF entry. 
http://cfbot.cputube.org/daniil-zakhlystov.html\n> \n> +zpq_should_compress(ZpqStream * zpq, char msg_type, uint32 msg_len)\n> +{\n> + return zpq_choose_compressor(zpq, msg_type, msg_len) == -1;\n> \n> I think this is backwards , and should say != -1 ?\n> \n> As written, the server GUC libpq_compression defaults to \"on\", and the client\n> doesn't request compression. I think the server GUC should default to off.\n> I failed to convince Kontantin about this last year. The reason is that 1)\n> it's a new feature; 2) with security implications. An admin should need to\n> \"opt in\" to this. I still wonder if this should be controlled by a new \"TYPE\"\n> in pg_hba (rather than a GUC); that would make it exclusive of SSL.\n\nI assume this compression happens before it is encrypted for TLS\ntransport. Second, compression was removed from TLS because there were\ntoo many ways for HTTP to weaken encryption. I assume the Postgres wire\nprotocol doesn't have similar exploit possibilities.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 7 Jan 2022 13:46:00 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "On Fri, Jan 07, 2022 at 01:46:00PM -0500, Bruce Momjian wrote:\n> On Sat, Jan 1, 2022 at 11:25:05AM -0600, Justin Pryzby wrote:\n> > Thanks for working on this. The patch looks to be in good shape - I hope more\n> > people will help to review and test it. I took the liberty of creating a new\n> > CF entry. 
> > http://cfbot.cputube.org/daniil-zakhlystov.html\n> > \n> > +zpq_should_compress(ZpqStream * zpq, char msg_type, uint32 msg_len)\n> > +{\n> > + return zpq_choose_compressor(zpq, msg_type, msg_len) == -1;\n> > \n> > I think this is backwards , and should say != -1 ?\n> > \n> > As written, the server GUC libpq_compression defaults to \"on\", and the client\n> > doesn't request compression. I think the server GUC should default to off.\n> > I failed to convince Kontantin about this last year. The reason is that 1)\n> > it's a new feature; 2) with security implications. An admin should need to\n> > \"opt in\" to this. I still wonder if this should be controlled by a new \"TYPE\"\n> > in pg_hba (rather than a GUC); that would make it exclusive of SSL.\n> \n> I assume this compression happens before it is encrypted for TLS\n> transport. Second, compression was removed from TLS because there were\n> too many ways for HTTP to weaken encryption. I assume the Postgres wire\n> protocol doesn't have similar exploit possibilities.\n\nIt's discussed in last year's thread. The thinking is that there tends to be\n*fewer* exploitable opportunities between application->DB than between\nbrowser->app.\n\nBut it's still a known concern, and should default to off - as I said.\n\nThat's also why I wondered if compression should be controlled by pg_hba,\nrather than a GUC. To require/allow a DBA to opt-in to it for specific hosts.\nOr to make it exclusive of ssl.
We could choose to not support that case at\nall, or (depending on the implementation) refuse that combination of layers.\n\n-- \nJustin", "msg_date": "Fri, 7 Jan 2022 13:21:10 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "Greetings,\n\n* Justin Pryzby (pryzby@telsasoft.com) wrote:\n> On Fri, Jan 07, 2022 at 01:46:00PM -0500, Bruce Momjian wrote:\n> > On Sat, Jan 1, 2022 at 11:25:05AM -0600, Justin Pryzby wrote:\n> > > Thanks for working on this. The patch looks to be in good shape - I hope more\n> > > people will help to review and test it. I took the liberty of creating a new\n> > > CF entry. http://cfbot.cputube.org/daniil-zakhlystov.html\n> > > \n> > > +zpq_should_compress(ZpqStream * zpq, char msg_type, uint32 msg_len)\n> > > +{\n> > > + return zpq_choose_compressor(zpq, msg_type, msg_len) == -1;\n> > > \n> > > I think this is backwards , and should say != -1 ?\n> > > \n> > > As written, the server GUC libpq_compression defaults to \"on\", and the client\n> > > doesn't request compression. I think the server GUC should default to off.\n> > > I failed to convince Kontantin about this last year. The reason is that 1)\n> > > it's a new feature; 2) with security implications. An admin should need to\n> > > \"opt in\" to this. I still wonder if this should be controlled by a new \"TYPE\"\n> > > in pg_hba (rather than a GUC); that would make it exclusive of SSL.\n> > \n> > I assume this compression happens before it is encrypted for TLS\n> > transport. Second, compression was removed from TLS because there were\n> > too many ways for HTTP to weaken encryption. I assume the Postgres wire\n> > protocol doesn't have similar exploit possibilities.\n> \n> It's discussed in last year's thread.
> The thinking is that there tends to be\n> *fewer* exploitable opportunities between application->DB than between\n> browser->app.\n\nYes, this was discussed previously and addressed.\n\n> But it's still a known concern, and should default to off - as I said.\n\nI'm not entirely convinced of this but also am happy enough as long as\nthe capability exists, no matter if it's off or on by default.\n\n> That's also why I wondered if compression should be controlled by pg_hba,\n> rather than a GUC. To require/allow a DBA to opt-in to it for specific hosts.\n> Or to make it exclusive of ssl. We could choose to not support that case at\n> all, or (depending on the implementation) refuse that combination of layers.\n\nI'm definitely against us deciding that we know better than admins if\nthis is an acceptable trade-off in their environment, or not. Letting\nusers/admins control it is fine, but I don't feel we should forbid it.\n\nAs for the details of how we allow control over it, I suppose there's a\nnumber of options. Having it in the HBA doesn't seem terrible, though I\nsuspect most will just want to enable it across the board and having to\nhave \"compression=allowed\" or whatever added to every hba line seems\nlikely to be annoying.
Maybe a global GUC and then allow the hba to\noverride?\n\nThanks,\n\nStephen", "msg_date": "Fri, 7 Jan 2022 15:56:39 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "On Fri, Jan 7, 2022 at 03:56:39PM -0500, Stephen Frost wrote:\n> As for the details of how we allow control over it, I suppose there's a\n> number of options. Having it in the HBA doesn't seem terrible, though I\n> suspect most will just want to enable it across the board and having to\n> have \"compression=allowed\" or whatever added to every hba line seems\n> likely to be annoying. Maybe a global GUC and then allow the hba to\n> override?\n\nEwe, I would like to avoid going in the GUC & pg_hba.conf direction,\nunless we have other cases where we already do this.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 7 Jan 2022 16:09:37 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Fri, Jan 7, 2022 at 03:56:39PM -0500, Stephen Frost wrote:\n> > As for the details of how we allow control over it, I suppose there's a\n> > number of options. Having it in the HBA doesn't seem terrible, though I\n> > suspect most will just want to enable it across the board and having to\n> > have \"compression=allowed\" or whatever added to every hba line seems\n> > likely to be annoying. Maybe a global GUC and then allow the hba to\n> > override?\n> \n> Ewe, I would like to avoid going in the GUC & pg_hba.conf direction,\n> unless we have other cases where we already do this.\n\nI mean, not exactly the same, but ... ssl?\n\nThanks,\n\nStephen", "msg_date": "Fri, 7 Jan 2022 16:17:58 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "\n\n> On 8 Jan 2022, at 01:56, Stephen Frost <sfrost@snowman.net> wrote:\n>> \n>> It's discussed in last year's thread. The thinking is that there tends to be\n>> *fewer* exploitable opportunities between application->DB than between\n>> browser->app.\n> \n> Yes, this was discussed previously and addressed.\n\nWhat else do we need to decide architecturally to make protocol compression happen in 15?
As far as I can see - only the HBA/GUC part.\n\nBest regards, Andrey Borodin.", "msg_date": "Wed, 12 Jan 2022 14:38:55 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "zlib still causes check-world to get stuck. I first mentioned this last March:\n20210319062800.GI11765@telsasoft.com\n\nActually all the compression methods seems to get stuck with\ntime make check -C src/bin/pg_rewind\ntime make check -C src/test/isolation\n\nFor CI purposes, there should be an 0003 patch which enables compression by\ndefault, for all message types and maybe all lengths.\n\nI removed the thread from the CFBOT until that's resolved.\n\n-- \nJustin", "msg_date": "Wed, 12 Jan 2022 08:15:26 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "Hi, Justin!\n\nFirst of all, thanks for the detailed review. I’ve applied your patches to the current version.\n\n\n> On 1 Jan 2022, at 22:25, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> I wonder whether the asymmetric compression idea is useful. The only\n> application I can see for this is that a server might allow to *decompress*\n> data from a client, but may not want to *compress* data to a client.\n\nIn the current patch state, there is no option to forbid the server-to-client or client-to-server\ncompression only. The decision of whether to compress or not is based only on the message length. In\nthe future, we can implement more precise control without the need for any protocol changes.\n\n> On 1 Jan 2022, at 22:25, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> I'm not sure it's needed to allow the compression to be changed dynamically,\n> and the generality to support a different compression mthod for each libpq\n> message type seems excessive.
> Maybe it's enough to check that the message type\n> is one of VALID_LONG_MESSAGE_TYPE and its length is long enough.\n\nYes, as stated above, in the current patch version protocol message type is ignored but can potentially\nbe taken into account. All messages will be compressed using the first available compressor. For\nexample, if postgresql.conf contains “libpq_compression = zlib,zstd” and has “compression = zlib,zstd”\nin its connection string, zlib will be used to compress all messages with length more than the threshold.\n\n\n> On 1 Jan 2022, at 22:25, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> What happenes if the compression doesn't decrease the message size ? I don't\n> see anything that allows sending the original, raw data. The advantage would\n> be that the remote side doesn't incur the overhead of decompression.\n\nBecause of the streaming compression, we can’t just throw away some compressed data because future\ncompressed messages might back-reference it. There are two ways to solve this:\n1. Do not use the streaming compression \n2. Somehow reset the compression context to the state before the message has been compressed\n\nI’ll look into it. Also, if anyone has any thoughts on this, I’ll appreciate hearing your opinions.\n\n\n> On 1 Jan 2022, at 22:25, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> I hit this assertion with PGCOMPRESSION=lz4 (but not zlib), and haven't yet\n> found the cause.\n\nThanks for reporting this. Looks like LZ4 implementation is not polished yet and needs some \nadditional effort. I'll look into it too.\n\n> On 12 Jan 2022, at 19:15, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> zlib still causes check-world to get stuck.
> I first mentioned this last March:\n> 20210319062800.GI11765@telsasoft.com\n> \n> Actually all the compression methods seems to get stuck with\n> time make check -C src/bin/pg_rewind\n> time make check -C src/test/isolation\n> \n> For CI purposes, there should be an 0003 patch which enables compression by\n> default, for all message types and maybe all lengths.\n> \n> I removed the thread from the CFBOT until that's resolved.\n\nI’ve fixed the failing tests, now they should pass.\n\nThanks,\n\nDaniil Zakhlystov", "msg_date": "Fri, 14 Jan 2022 02:12:17 +0500", "msg_from": "Daniil Zakhlystov <usernamedt@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "On Fri, Jan 14, 2022 at 02:12:17AM +0500, Daniil Zakhlystov wrote:\n> Hi, Justin!\n> \n> First of all, thanks for the detailed review. I’ve applied your patches to the current version.\n\nNote that my message had other comments that weren't addressed in this patch.\n\nYour 0003 patch has a couple \"noise\" hunks that get rid of ^M characters added\nin previous patches. The ^M shouldn't be added in the first place. Did you\napply my fixes using git-am or something else ?\n\nOn Fri, Jan 14, 2022 at 02:12:17AM +0500, Daniil Zakhlystov wrote:\n> > On 12 Jan 2022, at 19:15, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > \n> > zlib still causes check-world to get stuck. I first mentioned this last March:\n> > 20210319062800.GI11765@telsasoft.com\n> > \n...\n> > I removed the thread from the CFBOT until that's resolved.\n> \n> I’ve fixed the failing tests, now they should pass.\n\nmacos: passed\nlinux: timed out after 1hr\nfreebsd: failed in pg_rewind: ...
<= replay_lsn AND state = 'streaming' FROM ...\nwindows: \"Failed test 'data_checksums=on is reported on an offline cluster stdout /(?^:^on$)/'\" / WARNING: 01000: algorithm zlib is not supported\n\nNote that it's possible and easy to kick off a CI run using any github account:\nsee ./src/tools/ci/README\n\nFor me, it's faster than running check-world -j4 locally, and runs tests on 4\nOSes.\n\nI re-ran your branch under my own account and linux didn't get stuck (and the\ncompiler warnings tests passed). But on a third attempt, macos failed the\npg_rewind test, and bsd failed the subscription test:\n| SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');\n\n-- \nJustin", "msg_date": "Thu, 13 Jan 2022 17:58:27 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "Hi!\n\n\n> On 1 Jan 2022, at 22:25, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> +/* GUC variable containing the allowed compression algorithms list (separated by comma) */\n> +char *libpq_compress_algorithms;\n> \n> => The global variable is conventionally initialized to the same default as\n> guc.c (even though the GUC default value will be applied at startup).\n\nI’ve updated libpq_compress_algorithms to match the corresponding guc.c setting value.\n\n\n> On 1 Jan 2022, at 22:25, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> + * - NegotiateProtocolVersion in cases when server does not support protocol compression\n> + * Anything else probably means it's not Postgres on the other end at all.\n> */\n> - if (!(beresp == 'R' || beresp == 'E'))\n> + if (!(beresp == 'R' || beresp == 'E' || beresp == 'z' || beresp == 'v'))\n> \n> => I think NegotiateProtocolVersion and 'v' are from an old version of the\n> patch and no longer used ?\n\nNo, it is still relevant in the current patch version.
I’ve added more comments regarding this case\nand also made the compression negotiation behavior more transparent.\n\nIf the client requested the compression, but the server does not support the libpq compression feature\nbecause of the old Postgres version, the client will report an error and exit. If the server rejected\nthe client's compression request (for example, if libpq_compression is set to ‘off’ or no matching\nalgorithms found), the client will also report an error and exit.\n\n> On 14 Jan 2022, at 04:58, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> Your 0003 patch has a couple \"noise\" hunks that get rid of ^M characters added\n> in previous patches. The ^M shouldn't be added in the first place. Did you\n> apply my fixes using git-am or something else ?\n\nI’ve fixed the line endings and applied the pgindent. Now each patch should be OK.\n\n> On 1 Jan 2022, at 22:25, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> => Since March, errmsg doesn't need extra parenthesis around it (e3a87b4).\n\n\n> On 1 Jan 2022, at 22:25, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> + * Report current transaction start timestamp as the specified value.\n> + * Zero means there is no active transaction.\n> \n> => The comment doesn't correspond to the function\n\n\n> On 1 Jan 2022, at 22:25, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> + return id >= 0 && id < (sizeof(zs_algorithms) / sizeof(*zs_algorithms));\n> + size_t n_algorithms = sizeof(zs_algorithms) / sizeof(*zs_algorithms);\n> \n> => You can use lengthof() for these\n\n\nThanks, I’ve updated the patch and corrected the highlighted issues.\n\n> On 1 Jan 2022, at 22:25, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> I think the psql conninfo Compressor/Decompressor line may be confusing.\n> Maybe it should talk about Client->Server and Server->Client.\n> Maybe it should be displayed on a single line.
\n> \n> Actually, right now the \\conninfo part is hardly useful, since the compressor\n> is allocated lazily/\"on demand\", so it shows the compression of some previous\n> command, but maybe not the last command, and maybe not the next command…\n\nI’ve updated the \\conninfo part, now it shows the list of the negotiated compression algorithms. \n\nAlso, I’ve added the check for the compression option to the frontend side.\n\n> On 14 Jan 2022, at 04:58, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> macos: passed\n> linux: timed out after 1hr\n> freebsd: failed in pg_rewind: ... <= replay_lsn AND state = 'streaming' FROM ...\n> windows: \"Failed test 'data_checksums=on is reported on an offline cluster stdout /(?^:^on$)/'\" / WARNING: 01000: algorithm zlib is not supported\n> \n> Note that it's possible and easy to kick off a CI run using any github account:\n> see ./src/tools/ci/README\n> \n> For me, it's faster than running check-world -j4 locally, and runs tests on 4\n> OSes.\n\n\nI’ve resolved the stuck tests and added zlib support for CI Windows builds to patch 0003-*. Thanks\nfor the suggestion, all tests seem to be OK now, except the macOS one.
It won't schedule in Cirrus\nCI for some reason, but I guess it happens because of my GitHub account limitation.\n\n--\nThanks,\n\nDaniil Zakhlystov", "msg_date": "Tue, 18 Jan 2022 02:06:32 +0500", "msg_from": "Daniil Zakhlystov <usernamedt@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "On Tue, Jan 18, 2022 at 02:06:32AM +0500, Daniil Zakhlystov wrote:\n> > => Since March, errmsg doesn't need extra parenthesis around it (e3a87b4).\n\n> I’ve resolved the stuck tests and added zlib support for CI Windows builds to patch 0003-*. Thanks\n> for the suggestion, all tests seem to be OK now, except the macOS one. It won't schedule in Cirrus\n> CI for some reason, but I guess it happens because of my GitHub account limitation.\n\nI don't know about your github account, but it works for cfbot, which is now\ngreen.\n\nThanks for implementing zlib for windows. Did you try this with default\ncompressions set to lz4 and zstd ?\n\nThe thread from 2019 is very long, and starts off with the guidance that\ncompression had been implemented at the wrong layer. It looks like this hasn't\nchanged since then. secure_read/write are passed as function pointers to the\nZPQ interface, which then calls back to them to read and flush its compression\nbuffers. As I understand, the suggestion was to leave the socket reads and\nwrites alone. And then conditionally de/compress buffers after reading /\nbefore writing from the socket if compression was negotiated.\n\nIt's currently done like this\npq_recvbuf() => secure_read() - when compression is disabled \npq_recvbuf() => ZPQ => secure_read() - when compression is enabled \n\nDmitri sent a partial, POC patch which changes the de/compression to happen in\nsecure_read/write, which is changed to call ZPQ: \nhttps://www.postgresql.org/message-id/CA+q6zcUPrssNaRS+FyoBsD-F2stK1Roa-4sAhFOfAjOWLziM4g@mail.gmail.com\npq_recvbuf() => secure_read() => ZPQ\n\nThe same thing is true of the frontend: function pointers to\npqsecure_read/write are being passed to zpq_create, and then the ZPQ interface\ncalled instead of the original functions. Those are the functions which read\nusing SSL, so they should also handle compression.\n\nThat's where SSL is handled, and it seems like the right place to handle\ncompression. Have you evaluated that way to do things ?\n\nKonstantin said he put ZPQ at that layer seems to 1) avoid code duplication\nbetween client/server; and, 2) to allow compression to happen before SSL, to\nallow both (if the admin decides it's okay).
But I don't see why compression\ncan't happen before sending to SSL, or after reading from it?\n\n-- \nJustin", "msg_date": "Mon, 17 Jan 2022 22:39:20 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "If there's no objection, I'd like to move this to the next CF for consideration\nin PG16.\n\nOn Mon, Jan 17, 2022 at 10:39:19PM -0600, Justin Pryzby wrote:\n> On Tue, Jan 18, 2022 at 02:06:32AM +0500, Daniil Zakhlystov wrote:\n> > > => Since March, errmsg doesn't need extra parenthesis around it (e3a87b4).\n> \n> > I’ve resolved the stuck tests and added zlib support for CI Windows builds to patch 0003-*. Thanks\n> > for the suggestion, all tests seem to be OK now, except the macOS one. It won't schedule in Cirrus\n> > CI for some reason, but I guess it happens because of my GitHub account limitation.\n> \n> I don't know about your github account, but it works for cfbot, which is now\n> green.\n> \n> Thanks for implementing zlib for windows. Did you try this with default\n> compressions set to lz4 and zstd ?\n> \n> The thread from 2019 is very long, and starts off with the guidance that\n> compression had been implemented at the wrong layer. It looks like this hasn't\n> changed since then. secure_read/write are passed as function pointers to the\n> ZPQ interface, which then calls back to them to read and flush its compression\n> buffers. As I understand, the suggestion was to leave the socket reads and\n> writes alone.
> And then conditionally de/compress buffers after reading /\n> before writing from the socket if compression was negotiated.\n> \n> It's currently done like this\n> pq_recvbuf() => secure_read() - when compression is disabled \n> pq_recvbuf() => ZPQ => secure_read() - when compression is enabled \n> \n> Dmitri sent a partial, POC patch which changes the de/compression to happen in\n> secure_read/write, which is changed to call ZPQ: \n> https://www.postgresql.org/message-id/CA+q6zcUPrssNaRS+FyoBsD-F2stK1Roa-4sAhFOfAjOWLziM4g@mail.gmail.com\n> pq_recvbuf() => secure_read() => ZPQ\n> \n> The same thing is true of the frontend: function pointers to\n> pqsecure_read/write are being passed to zpq_create, and then the ZPQ interface\n> called instead of the original functions. Those are the functions which read\n> using SSL, so they should also handle compression.\n> \n> That's where SSL is handled, and it seems like the right place to handle\n> compression. Have you evaluated that way to do things ?\n> \n> Konstantin said he put ZPQ at that layer seems to 1) avoid code duplication\n> between client/server; and, 2) to allow compression to happen before SSL, to\n> allow both (if the admin decides it's okay).
> But I don't see why compression\n> can't happen before sending to SSL, or after reading from it?\n\n\n", "msg_date": "Wed, 2 Mar 2022 15:33:52 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "Ok, thanks\n\n\n> On 3 Mar 2022, at 02:33, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> If there's no objection, I'd like to move this to the next CF for consideration\n> in PG16.\n> \n> On Mon, Jan 17, 2022 at 10:39:19PM -0600, Justin Pryzby wrote:\n>> On Tue, Jan 18, 2022 at 02:06:32AM +0500, Daniil Zakhlystov wrote:\n>>>> => Since March, errmsg doesn't need extra parenthesis around it (e3a87b4).\n>> \n>>> I’ve resolved the stuck tests and added zlib support for CI Windows builds to patch 0003-*. Thanks\n>>> for the suggestion, all tests seem to be OK now, except the macOS one. It won't schedule in Cirrus\n>>> CI for some reason, but I guess it happens because of my GitHub account limitation.\n>> \n>> I don't know about your github account, but it works for cfbot, which is now\n>> green.\n>> \n>> Thanks for implementing zlib for windows. Did you try this with default\n>> compressions set to lz4 and zstd ?\n>> \n>> The thread from 2019 is very long, and starts off with the guidance that\n>> compression had been implemented at the wrong layer. It looks like this hasn't\n>> changed since then. secure_read/write are passed as function pointers to the\n>> ZPQ interface, which then calls back to them to read and flush its compression\n>> buffers. As I understand, the suggestion was to leave the socket reads and\n>> writes alone.
>> And then conditionally de/compress buffers after reading /\n>> before writing from the socket if compression was negotiated.\n>> \n>> It's currently done like this\n>> pq_recvbuf() => secure_read() - when compression is disabled \n>> pq_recvbuf() => ZPQ => secure_read() - when compression is enabled \n>> \n>> Dmitri sent a partial, POC patch which changes the de/compression to happen in\n>> secure_read/write, which is changed to call ZPQ: \n>> https://www.postgresql.org/message-id/CA+q6zcUPrssNaRS+FyoBsD-F2stK1Roa-4sAhFOfAjOWLziM4g@mail.gmail.com\n>> pq_recvbuf() => secure_read() => ZPQ\n>> \n>> The same thing is true of the frontend: function pointers to\n>> pqsecure_read/write are being passed to zpq_create, and then the ZPQ interface\n>> called instead of the original functions. Those are the functions which read\n>> using SSL, so they should also handle compression.\n>> \n>> That's where SSL is handled, and it seems like the right place to handle\n>> compression. Have you evaluated that way to do things ?\n>> \n>> Konstantin said he put ZPQ at that layer seems to 1) avoid code duplication\n>> between client/server; and, 2) to allow compression to happen before SSL, to\n>> allow both (if the admin decides it's okay). But I don't see why compression\n>> can't happen before sending to SSL, or after reading from it?\n\n\n\n", "msg_date": "Thu, 3 Mar 2022 13:50:21 +0500", "msg_from": "Daniil Zakhlystov <usernamedt@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "This entry has been waiting on author input for a while (our current\nthreshold is roughly two weeks), so I've marked it Returned with\nFeedback.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can resurrect the patch entry by visiting\n\n https://commitfest.postgresql.org/38/3499/\n\nand changing the\nstatus to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\".
(Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob", "msg_date": "Tue, 2 Aug 2022 13:53:42 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "On Tue, Aug 2, 2022 at 1:53 PM Jacob Champion <jchampion@timescale.com> wrote:\n>\n> and changing the status to \"Needs Review\"\n\nI've tried the patch, it works as advertised. The code generally is\nOK, maybe some functions require comments (because at least their\nneighbours have some).\nSome linkers complain about zs_is_valid_impl_id() being inline while\nused in other modules.\n\nSome architectural notes:\n1. Currently when the user requests compression from libpq, but the\nserver does not support any of the codecs the client has - the connection\nwill be rejected by the client. I think this should be configured akin to\nSSL: on, try, off.\n2. On the zpq_stream level of abstraction we parse a stream of bytes\nto look for individual message headers. I think this is OK, just a\nlittle bit awkward.\n3. CompressionAck message can be completely replaced by ParameterStatus message.\n4. Instead of sending a separate SetCompressionMethod, the\nCompressedData can have its header with the index of the used method\nand the necessity to restart compression context.\n5. Portions of pg_stat_network_traffic can be extracted to separate\npatch step to ease the review.
And, actually, the scope of this view\nis slightly beyond compression anyway.\n\nWhat do you think?\n\nAlso, compression is a very cool and awaited feature, hope to see it\ncommitted one day, thank you for working on this!\n\n\nBest regards, Andrey Borodin.", "msg_date": "Sat, 12 Nov 2022 13:47:45 -0800", "msg_from": "Andrey Borodin <amborodin86@gmail.com>", "msg_from_op": false, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "On Sat, Nov 12, 2022 at 1:47 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n>\n> I've tried the patch, it works as advertised.\n\nWhile testing the patch some more I observed unpleasant segfaults:\n\n#26 0x00007fecafa1e058 in __memcpy_ssse3_back () from target:/lib64/libc.so.6\n#27 0x000000000b08fda2 in lz4_decompress (d_stream=0x18cf82a0,\nsrc=0x7feae4fa505d, src_size=92,\n src_processed=0x7ffff9f4fdf8, dst=0x18b01f80, dst_size=8192,\ndst_processed=0x7ffff9f4fe60)\n#28 0x000000000b090624 in zs_read (zs=0x18cdfbf0, src=0x7feae4fa505d,\nsrc_size=92, src_processed=0x7ffff9f4fdf8,\n dst=0x18b01f80, dst_size=8192, dst_processed=0x7ffff9f4fe60)\n#29 0x000000000b08eb8f in zpq_read_compressed_message\n(zpq=0x7feae4fa5010, dst=0x18b01f80 \"Q\", dst_len=8192,\n dst_processed=0x7ffff9f4fe60)\n#30 0x000000000b08f1a9 in zpq_read (zpq=0x7feae4fa5010,\ndst=0x18b01f80, dst_size=8192, noblock=false)\n\n(gdb) select-frame 27\n(gdb) info locals\nds = 0x18cf82a0\ndecPtr = 0x18cf8aec \"\"\ndecBytes = -87\n\nThis is the buffer overrun by decompression. I think the receive\nbuffer must be twice as big as the send buffer to accommodate such\nmessages.\nAlso this portion of lz4_decompress()\n Assert(decBytes > 0);\nmust actually be a real check and elog(ERROR,).
Because clients can\nintentionally compose CompressedData to blow up a server.\n\nBest regards, Andrey Borodin.", "msg_date": "Sat, 12 Nov 2022 20:04:46 -0800", "msg_from": "Andrey Borodin <amborodin86@gmail.com>", "msg_from_op": false, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "On Sat, Nov 12, 2022 at 8:04 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n>\n> While testing the patch some more I observed unpleasant segfaults:\n>\n> #27 0x000000000b08fda2 in lz4_decompress (d_stream=0x18cf82a0,\n> src=0x7feae4fa505d, src_size=92,\n> (gdb) select-frame 27\n> (gdb) info locals\n> ds = 0x18cf82a0\n> decPtr = 0x18cf8aec \"\"\n> decBytes = -87\n>\n\nI've debugged the stuff and the problem turned out to be wrong\nmessage limits for Lz4 compression\n+#define MESSAGE_MAX_BYTES 819200\ninstead of\n+#define MESSAGE_MAX_BYTES 8192\n\nOther codecs can utilize continuation of the decompression stream\nusing ZS_DATA_PENDING, while Lz4 cannot do this. I was going to\nproduce a quickfix for all my lz4 findings, but it occurred to me that a\npatchset needs a heavy rebase. If no one shows up to fix it I'll do\nthat in one of the coming weekends. Either way here's a reproducer for\nthe coredump:\npsql 'compression=lz4' -f boom.sql\n\nThanks!\n\nBest regards, Andrey Borodin.", "msg_date": "Mon, 14 Nov 2022 19:44:24 -0800", "msg_from": "Andrey Borodin <amborodin86@gmail.com>", "msg_from_op": false, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "On Mon, Nov 14, 2022 at 07:44:24PM -0800, Andrey Borodin wrote:\n> patchset needs a heavy rebase. If no one shows up to fix it I'll do\n\nDespite what its git timestamp says, this is based on the most recent\npatch from January, which I've had floating around since then.
It\nneeded to be rebased over at least:\n\n - guc_tables patch;\n - build and test with meson;\n - doc/\n\nSome of my changes are separate so you can see what I've done.\ncheck_libpq_compression() is in the wrong place, but I couldn't\nimmediately see where else to put it, since src/common can't include the\nbackend's guc headers.\n\nSome of the makefile changes seem unnecessary (now?), and my meson\nchanges don't seem quite right, either.\n\nThere's no reason for Zstd to be a separate patch anymore.\n\nIt should be updated to parse compression level and options using the\ninfrastructure introduced for basebackup.\n\nAnd address the architectural issue from 2 years ago:\nhttps://www.postgresql.org/message-id/20220118043919.GA23027%40telsasoft.com\n\nThe global variable PqStream should be moved into some libpq structure\n(Port?) and handled within secure_read(). And pqsecure_read shouldn't\nbe passed as a function pointer/callback. And verify it passes tests\nwith all supported compression algorithms and connects to old servers.\n\n-- \nJustin", "msg_date": "Tue, 15 Nov 2022 21:17:28 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "On Tue, Nov 15, 2022 at 7:17 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n\nAlso I've found one more TODO item for the patch. Currently in\nfe-connect.c patch is doing buffer reset:\nconn->inStart = conn->inCursor = conn->inEnd = 0;\nThis effectively consumes bytes up to the current cursor. However, packet\ninspection is continued. The patch works because in most cases\nfollowing code will issue re-read of message. Coincidentally.\n/* Get the type of request.
*/\nif (pqGetInt((int *) &areq, 4, conn))\n{\n/* We'll come back when there are more data */\nreturn PGRES_POLLING_READING;\n}\n\nBut I think we need a proper\ngoto keep_going;\nto start from the beginning of the message.\n\n\nThank you!\n\nBest regards, Andrey Borodin.\n\n\n", "msg_date": "Wed, 16 Nov 2022 16:27:06 -0800", "msg_from": "Andrey Borodin <amborodin86@gmail.com>", "msg_from_op": false, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "On 17.11.22 01:27, Andrey Borodin wrote:\n> Also I've found one more TODO item for the patch. Currently in\n> fe-connect.c patch is doing buffer reset:\n> conn->inStart = conn->inCursor = conn->inEnd = 0;\n> This effectively consumes butes up tu current cursor. However, packet\n> inspection is continued. The patch works because in most cases\n> following code will issue re-read of message. Coincidentally.\n> /* Get the type of request. */\n> if (pqGetInt((int *) &areq, 4, conn))\n> {\n> /* We'll come back when there are more data */\n> return PGRES_POLLING_READING;\n> }\n\nNote that the above code was just changed in dce92e59b1. I don't know \nhow that affects this patch set.\n\n\n\n", "msg_date": "Thu, 17 Nov 2022 16:09:22 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "On Thu, Nov 17, 2022 at 7:09 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> Note that the above code was just changed in dce92e59b1.\nThanks!\n\n> I don't know\n> how that affects this patch set.\nWith dce92e59b1 it would be much easier to find a bug in the compression patch.\n\nSome more notes about the patch. (sorry for posting review notes in so\nmany different messages)\n\n1. zs_is_valid_impl_id(unsigned int id)\n{\nreturn id >= 0 && id < lengthof(zs_algorithms);\n}\n\nid is unsigned, no need to check it's non-negative.\n\n2. 
This literal\n{no_compression_name}\nshould be replaced by explicit form\n{no_compression_name, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL}\n\n3. Comments like \"Return true if should, false if should not.\" are useless.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n", "msg_date": "Thu, 17 Nov 2022 17:07:34 -0800", "msg_from": "Andrey Borodin <amborodin86@gmail.com>", "msg_from_op": false, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "On 18.11.22 02:07, Andrey Borodin wrote:\n> 2. This literal\n> {no_compression_name}\n> should be replaced by explicit form\n> {no_compression_name, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL}\n\nThat doesn't seem better.", "msg_date": "Fri, 18 Nov 2022 17:31:24 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "Pinging to see if anyone has continued to work on this behind-the-scenes or\nwhether this is the latest patch set there is.\n\n-- \nJonah H. Harris", "msg_date": "Thu, 10 Aug 2023 10:47:31 -0400", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": false, "msg_subject": "Re: libpq compression (part 2)" }, { "msg_contents": "\n\n> On 10 Aug 2023, at 19:47, Jonah H. Harris <jonah.harris@gmail.com> wrote:\n> \n> Pinging to see if anyone has continued to work on this behind-the-scenes or whether this is the latest patch set there is.\n\nIt's still on my TODO list, but I haven't done many review cycles yet. And the patch series already needs a heavy rebase.\n\nThanks for your interest in the topic.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 11 Aug 2023 16:31:09 +0500", "msg_from": "\"Andrey M. 
Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: libpq compression (part 2)" } ]
[ { "msg_contents": "Hi,\n\nCurrently the server is erroring out when unable to remove/parse a\nlogical rewrite file in CheckPointLogicalRewriteHeap wasting the\namount of work the checkpoint has done and preventing the checkpoint\nfrom finishing. This is unlike what CheckPointSnapBuild does for snapshot\nfiles, i.e. it just emits a message at LOG level and continues if it is\nunable to parse or remove the file. Attaching a small patch applying\nthe same idea to the mapping files.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 31 Dec 2021 18:12:37 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "On 12/31/21, 4:44 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> Currently the server is erroring out when unable to remove/parse a\r\n> logical rewrite file in CheckPointLogicalRewriteHeap wasting the\r\n> amount of work the checkpoint has done and preventing the checkpoint\r\n> from finishing. This is unlike what CheckPointSnapBuild does for snapshot\r\n> files, i.e. it just emits a message at LOG level and continues if it is\r\n> unable to parse or remove the file. Attaching a small patch applying\r\n> the same idea to the mapping files.\r\n\r\nThis seems reasonable to me. AFAICT moving on to other files after an\r\nerror shouldn't cause any problems. 
In fact, it's probably beneficial\r\nto try to clean up as much as possible so that the files do not\r\ncontinue to build up.\r\n\r\nThe only feedback I have for the patch is that I don't think the new\r\ncomments are necessary.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 12 Jan 2022 22:17:25 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files\n to save checkpoint work" }, { "msg_contents": "On Thu, Jan 13, 2022 at 3:47 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 12/31/21, 4:44 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Currently the server is erroring out when unable to remove/parse a\n> > logical rewrite file in CheckPointLogicalRewriteHeap wasting the\n> > amount of work the checkpoint has done and preventing the checkpoint\n> > from finishing. This is unlike CheckPointSnapBuild does for snapshot\n> > files i.e. it just emits a message at LOG level and continues if it is\n> > unable to parse or remove the file. Attaching a small patch applying\n> > the same idea to the mapping files.\n>\n> This seems reasonable to me. AFAICT moving on to other files after an\n> error shouldn't cause any problems. In fact, it's probably beneficial\n> to try to clean up as much as possible so that the files do not\n> continue to build up.\n\nThanks for the review Nathan!\n\n> The only feedback I have for the patch is that I don't think the new\n> comments are necessary.\n\nI borrowed the comments as-is from the CheckPointSnapBuild introduced\nby the commit b89e15105. 
IMO, let the comments be there as they\nexplain why we are not emitting ERRORs, however I will leave it to the\ncommitter to decide on that.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 13 Jan 2022 11:32:53 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "On 1/12/22, 10:03 PM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> On Thu, Jan 13, 2022 at 3:47 AM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> The only feedback I have for the patch is that I don't think the new\r\n>> comments are necessary.\r\n>\r\n> I borrowed the comments as-is from the CheckPointSnapBuild introduced\r\n> by the commit b89e15105. IMO, let the comments be there as they\r\n> explain why we are not emitting ERRORs, however I will leave it to the\r\n> committer to decide on that.\r\n\r\nHuh, somehow I missed that when I looked at it yesterday. I'm going\r\nto bump this one to ready-for-committer, then.\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 13 Jan 2022 19:24:15 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files\n to save checkpoint work" }, { "msg_contents": "Hi,\n\nOn 2021-12-31 18:12:37 +0530, Bharath Rupireddy wrote:\n> Currently the server is erroring out when unable to remove/parse a\n> logical rewrite file in CheckPointLogicalRewriteHeap wasting the\n> amount of work the checkpoint has done and preventing the checkpoint\n> from finishing.\n\nThis seems like it'd make failures to remove the files practically\ninvisible. 
Which'd have its own set of problems?\n\nWhat motivated proposing this change?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 13 Jan 2022 11:38:52 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Fri, Jan 14, 2022 at 1:08 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-12-31 18:12:37 +0530, Bharath Rupireddy wrote:\n> > Currently the server is erroring out when unable to remove/parse a\n> > logical rewrite file in CheckPointLogicalRewriteHeap wasting the\n> > amount of work the checkpoint has done and preventing the checkpoint\n> > from finishing.\n>\n> This seems like it'd make failures to remove the files practically\n> invisible. Which'd have its own set of problems?\n>\n> What motivated proposing this change?\n\nWe had an issue where there were many mapping files generated during\nthe crash recovery and end-of-recovery checkpoint was taking a lot of\ntime. We had to manually intervene and delete some of the mapping\nfiles (although it may not sound sensible) to make end-of-recovery\ncheckpoint faster. 
Because of the race condition between manual\ndeletion and checkpoint deletion, the unlink error occurred which\ncrashed the server and the server entered the recovery again wasting\nthe entire earlier recovery work.\n\nIn summary, with the changes (emitting LOG-only messages for unlink\nfailures and continuing with the other files) proposed for\nCheckPointLogicalRewriteHeap in this thread and the existing code in\nCheckPointSnapBuild, I'm sure it will help not waste the recovery\nthat has been done in case unlink fails for any reason.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 15 Jan 2022 14:04:12 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "Hi,\n\nOn Sat, Jan 15, 2022 at 02:04:12PM +0530, Bharath Rupireddy wrote:\n> \n> We had an issue where there were many mapping files generated during\n> the crash recovery and end-of-recovery checkpoint was taking a lot of\n> time. We had to manually intervene and delete some of the mapping\n> files (although it may not sound sensible) to make end-of-recovery\n> checkpoint faster. 
Because of the race condition between manual\n> deletion and checkpoint deletion, the unlink error occurred which\n> crashed the server and the server entered the recovery again wasting\n> the entire earlier recovery work.\n\nMaybe I'm missing something but wouldn't\nhttps://commitfest.postgresql.org/36/3448/ better solve the problem?\n\n\n", "msg_date": "Sat, 15 Jan 2022 17:29:07 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Sat, Jan 15, 2022 at 2:59 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Sat, Jan 15, 2022 at 02:04:12PM +0530, Bharath Rupireddy wrote:\n> >\n> > We had an issue where there were many mapping files generated during\n> > the crash recovery and end-of-recovery checkpoint was taking a lot of\n> > time. We had to manually intervene and delete some of the mapping\n> > files (although it may not sound sensible) to make end-of-recovery\n> > checkpoint faster. Because of the race condition between manual\n> > deletion and checkpoint deletion, the unlink error occurred which\n> > crashed the server and the server entered the recovery again wasting\n> > the entire earlier recovery work.\n>\n> Maybe I'm missing something but wouldn't\n> https://commitfest.postgresql.org/36/3448/ better solve the problem?\n\nThe error can cause the new background process proposed there in that\nthread to restart, which is again costly. 
Since we have LOG-only and\ncontinue behavior in CheckPointSnapBuild already, having the same\nbehavior for CheckPointLogicalRewriteHeap helps a lot.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 15 Jan 2022 19:58:28 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> On Sat, Jan 15, 2022 at 2:59 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>> Maybe I'm missing something but wouldn't\n>> https://commitfest.postgresql.org/36/3448/ better solve the problem?\n\n> The error can cause the new background process proposed there in that\n> thread to restart, which is again costly. Since we have LOG-only and\n> continue behavior in CheckPointSnapBuild already, having the same\n> behavior for CheckPointLogicalRewriteHeap helps a lot.\n\n[ stares at CheckPointLogicalRewriteHeap for awhile ... ]\n\nThis code has got more problems than that. It took me awhile to\nabsorb it, but we don't actually care about the contents of any of\nthose files; all of the information is encoded in the file *names*.\n(This strikes me as probably not a very efficient design, compared\nto putting the same data into a single text file; but for now I'll\nassume we're not up for a complete rewrite.) That being the case,\nI wonder what it is we expect fsync'ing the surviving files to do\nexactly. We should be fsync'ing the directory not the files\nthemselves, no?\n\nOther things that seem poorly thought out:\n\n* Why is the check for \"map-\" prefix after, rather than before,\nthe lstat?\n\n* Why is it okay to ignore lstat failure? Seems like we might\nas well not even have the lstat.\n\n* The sscanf on the file name would not notice trailing junk,\nsuch as an editor backup marker. 
Is that okay?\n\nAs far as the patch itself goes, I agree that failure to unlink\nis noncritical, because such a file would have no further effect\nand we can just ignore it. I think I also agree that failure\nof the sscanf is noncritical, because the implication of that\nis that the file name doesn't conform to our expectations, which\nmeans it's basically just like the check that causes us to\nignore names not starting with \"map-\". (Actually, isn't the\nseparate check for \"map-\" useless, given that sscanf will make\nthe equivalent check?)\n\nI started out wondering why the patch didn't also change the loop's\nother ERROR conditions to LOG. But we do want to ERROR if we're\nunable to sync transient state down to disk, and that is what\nthe other steps (think they) are doing. It might be worth a\ncomment to point that out though, before someone decides that\nif these errors are just LOG level then the others can be too.\n\nAnyway, I think possibly we can drop the bottom half of the loop\n(the part trying to fsync non-removed files) in favor of fsync'ing\nthe directory afterwards. Thoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 Jan 2022 13:34:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "I wrote:\n> Anyway, I think possibly we can drop the bottom half of the loop\n> (the part trying to fsync non-removed files) in favor of fsync'ing\n> the directory afterwards. Thoughts?\n\nOh, scratch that --- *this* loop doesn't care about the file\ncontents, but other code does. However, don't we need a directory\nfsync too? 
Or is that handled someplace else?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 Jan 2022 13:45:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "Hi,\n\nOn 2022-01-19 13:34:21 -0500, Tom Lane wrote:\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > On Sat, Jan 15, 2022 at 2:59 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >> Maybe I'm missing something but wouldn't\n> >> https://commitfest.postgresql.org/36/3448/ better solve the problem?\n>\n> > The error can cause the new background process proposed there in that\n> > thread to restart, which is again costly. Since we have LOG-only and\n> > continue behavior in CheckPointSnapBuild already, having the same\n> > behavior for CheckPointLogicalRewriteHeap helps a lot.\n>\n> [ stares at CheckPointLogicalRewriteHeap for awhile ... ]\n>\n> This code has got more problems than that. It took me awhile to\n> absorb it, but we don't actually care about the contents of any of\n> those files; all of the information is encoded in the file *names*.\n\nI'm not following - we *do* need the contents of the files? They're applied\nas-needed in ApplyLogicalMappingFile().\n\n\n> (This strikes me as probably not a very efficient design, compared\n> to putting the same data into a single text file; but for now I'll\n> assume we're not up for a complete rewrite.) That being the case,\n> I wonder what it is we expect fsync'ing the surviving files to do\n> exactly. We should be fsync'ing the directory not the files\n> themselves, no?\n\nFsyncing the directory doesn't guarantee anything about the contents of\nfiles. 
But, you're right, we need an fsync of the directory too.\n\n\n> Other things that seem poorly thought out:\n>\n> * Why is the check for \"map-\" prefix after, rather than before,\n> the lstat?\n\nIt doesn't seem to matter much - there shouldn't be a meaningful amount of\nother files in there.\n\n\n> * Why is it okay to ignore lstat failure? Seems like we might\n> as well not even have the lstat.\n\nYea, that seems odd, not sure why that ended up this way. I guess the aim\nmight have been to tolerate random files we don't have permissions for or\nsuch?\n\n\n> * The sscanf on the file name would not notice trailing junk,\n> such as an editor backup marker. Is that okay?\n\nI don't really see a problem with it - there shouldn't be other files matching\nthe pattern - but it couldn't hurt to check the pattern matches exhaustively.\n\n\n> As far as the patch itself goes, I agree that failure to unlink\n> is noncritical, because such a file would have no further effect\n> and we can just ignore it.\n\nI don't agree. We iterate through the directory regularly on systems with\ncatalog changes + logical decoding. An ever increasing list of gunk will make\nthat more and more expensive. And I haven't heard a meaningful reason why we\nwould have map-* files that we can't remove.\n\nIgnoring failures like this just makes problems much harder to debug and they\ntend to bite harder for it.\n\n\n> I think I also agree that failure of the sscanf is noncritical, because the\n> implication of that is that the file name doesn't conform to our\n> expectations, which means it's basically just like the check that causes us\n> to ignore names not starting with \"map-\". 
(Actually, isn't the separate\n> check for \"map-\" useless, given that sscanf will make the equivalent check?)\n\nWell, this way only files starting with \"map-\" are expected to conform to a\nstrict format, the rest is ignored?\n\n\n> Anyway, I think possibly we can drop the bottom half of the loop\n> (the part trying to fsync non-removed files) in favor of fsync'ing\n> the directory afterwards. Thoughts?\n\nI don't think that'd be correct.\n\n\nIn short: We should add a directory fsync, I'm fine with improving the error\nchecking, but the rest seems like a net-negative with no convincing reasoning.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 19 Jan 2022 11:07:35 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On 1/19/22, 11:08 AM, \"Andres Freund\" <andres@anarazel.de> wrote:\r\n> On 2022-01-19 13:34:21 -0500, Tom Lane wrote:\r\n>> As far as the patch itself goes, I agree that failure to unlink\r\n>> is noncritical, because such a file would have no further effect\r\n>> and we can just ignore it.\r\n>\r\n> I don't agree. We iterate through the directory regularly on systems with\r\n> catalog changes + logical decoding. An ever increasing list of gunk will make\r\n> that more and more expensive. And I haven't heard a meaningful reason why we\r\n> would have map-* files that we can't remove.\r\n\r\nI think the other side of this is that we don't want checkpointing to\r\ncontinually fail because of a noncritical failure. That could also\r\nlead to problems down the road.\r\n\r\n> Ignoring failures like this just makes problems much harder to debug and they\r\n> tend to bite harder for it.\r\n\r\nIf such noncritical failures happened regularly, the server logs will\r\nlikely become filled with messages about it. 
Perhaps users may not\r\nnotice for a while, but I don't think the proposed patch would make\r\ndebugging excessively difficult.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 19 Jan 2022 22:48:18 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files\n to save checkpoint work" }, { "msg_contents": "\"Bossart, Nathan\" <bossartn@amazon.com> writes:\n> I think the other side of this is that we don't want checkpointing to\n> continually fail because of a noncritical failure. That could also\n> lead to problems down the road.\n\nYeah, a persistent failure to complete checkpoints is very nasty.\nYour disk will soon fill with unrecyclable WAL. I don't see how\nthat's better than a somewhat hypothetical performance issue.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 Jan 2022 17:53:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "I took the liberty of adjusting Bharath's patch based on the latest\r\nfeedback.\r\n\r\nOn 1/19/22, 10:35 AM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> We should be fsync'ing the directory not the files\r\n> themselves, no?\r\n\r\nI added a directory sync at the end of CheckPointLogicalRewriteHeap(),\r\nwhich IIUC is enough.\r\n\r\n> * Why is the check for \"map-\" prefix after, rather than before,\r\n> the lstat?\r\n\r\nI swapped these checks. I stopped short of moving the sscanf() before\r\nthe lstat(), though, as 1) I don't think it will help very much and 2)\r\nit seemed weird to start emitting \"could not parse filename\" logs for\r\nnon-regular files we presently skip silently.\r\n\r\n> * Why is it okay to ignore lstat failure? 
Seems like we might\r\n> as well not even have the lstat.\r\n\r\nI added error checking for lstat().\r\n\r\n> * The sscanf on the file name would not notice trailing junk,\r\n> such as an editor backup marker. Is that okay?\r\n\r\nI think this is okay. The absolute worst that would happen would be\r\nthat the extra file would be deleted. This might eventually become a\r\nproblem if files with the same prefix format were created by the\r\nserver. However, CheckPointSnapBuild() already has this problem with\r\ntemporary files, and it claims not to need any extra handling:\r\n\r\n * temporary filenames from SnapBuildSerialize() include the LSN and\r\n * everything but are postfixed by .$pid.tmp. We can just remove them\r\n * the same as other files because there can be none that are\r\n * currently being written that are older than cutoff.\r\n\r\n> (Actually, isn't the\r\n> separate check for \"map-\" useless, given that sscanf will make\r\n> the equivalent check?)\r\n\r\nThe only benefit I see from the extra \"map-\" check is that it'll avoid\r\n\"could not parse filename\" logs for files that clearly aren't related\r\nto the task at hand. I don't know if this is expected during normal\r\noperation at the moment. I've left the \"map-\" check for now.\r\n\r\n> I started out wondering why the patch didn't also change the loop's\r\n> other ERROR conditions to LOG. But we do want to ERROR if we're\r\n> unable to sync transient state down to disk, and that is what\r\n> the other steps (think they) are doing. 
It might be worth a\r\n> comment to point that out though, before someone decides that\r\n> if these errors are just LOG level then the others can be too.\r\n\r\nI added such a comment.\r\n\r\nI also updated CheckPointSnapBuild() and UpdateLogicalMappings() with\r\nsimilar adjustments.\r\n\r\nNathan", "msg_date": "Thu, 20 Jan 2022 19:15:08 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files\n to save checkpoint work" }, { "msg_contents": "Hi,\n\nOn 2022-01-20 19:15:08 +0000, Bossart, Nathan wrote:\n> > * Why is it okay to ignore lstat failure? Seems like we might\n> > as well not even have the lstat.\n> \n> I added error checking for lstat().\n\nIt seems odd to change a bunch of things to not be errors anymore, but then\nadd new sources of errors. If we try to deal with concurrent deletions or\npermission issues - otherwise what's the point of making unlink() not an error\nanymore - why do we expect to be able to lstat()?\n\n\n> I also updated CheckPointSnapBuild() and UpdateLogicalMappings() with\n> similar adjustments.\n\nFWIW, I still think the ERROR->LOG changes are bad idea. The whole thing of\n\"oh, let's just ignore stuff that we don't expect and soldier on\" has bitten\nus over and over again. It makes us less robust, not more robust.\n\nIt's also just about impossible to monitor for problems that emit LOG.\n\n\nI'd be more on board accepting some selective errors. E.g. not erroring on\nENOENT, but continuing to error on others (most likely ENOACCESS). I think we\nshould *not* change the\n\n\n> +\t\t/*\n> +\t\t * We just log a message if a file doesn't fit the pattern, it's\n> +\t\t * probably some editor's lock/state file or similar...\n> +\t\t */\n\nAn editor's lock file that starts with map- would presumably be the whole\nfilename plus an additional file-ending. 
But this check won't catch those.\n\n\n> +\t\t\t * Unlike failures to unlink() old logical rewrite files, we must\n> +\t\t\t * ERROR if we're unable to sync transient state down to disk. This\n> +\t\t\t * allows replay to assume that everything written out before\n> +\t\t\t * checkpoint start is persisted.\n> \t\t\t */\n\nHm, not quite happy with the second bit. Checkpointed state being durable\nisn't about replay directly, it's about the basic property of a checkpoint\nbeing fulfilled?\n\n\n> \t\t\tpgstat_report_wait_start(WAIT_EVENT_LOGICAL_REWRITE_CHECKPOINT_SYNC);\n> \t\t\tif (pg_fsync(fd) != 0)\n> @@ -1282,4 +1305,7 @@ CheckPointLogicalRewriteHeap(void)\n> \t\t}\n> \t}\n> \tFreeDir(mappings_dir);\n> +\n> +\t/* persist directory entries to disk */\n> +\tfsync_fname(\"pg_logical/mappings\", true);\n> }\n\nThis is probably worth backpatching, if you split it out I'll do so.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 20 Jan 2022 11:46:18 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "Thanks for your feedback.\r\n\r\nOn 1/20/22, 11:47 AM, \"Andres Freund\" <andres@anarazel.de> wrote:\r\n> It seems odd to change a bunch of things to not be errors anymore, but then\r\n> add new sources of errors. If we try to deal with concurrent deletions or\r\n> permission issues - otherwise what's the point of making unlink() not an error\r\n> anymore - why do we expect to be able to lstat()?\r\n\r\nMy reasoning for making lstat() an ERROR was that there's a chance we\r\nneed to fsync() the file, and if we can't fsync() a file for whatever\r\nreason, we definitely want to ERROR. I suppose we could conditionally\r\nERROR based on the file name, but I don't know if that's really worth\r\nthe complexity.\r\n\r\n> I'd be more on board accepting some selective errors. E.g. 
not erroring on\r\n> ENOENT, but continuing to error on others (most likely ENOACCESS). I think we\r\n> should *not* change the\r\n\r\nI think this approach would work for the use-case Bharath mentioned\r\nupthread. In any case, if deleting a file fails because the file was\r\nalready deleted, there's no point in ERROR-ing. I think filtering\r\nerrors is a bit trickier for lstat(). If we would've fsync'd the file\r\nbut lstat() gives us ENOENT, we may have a problem. (However, there's\r\nalso a good chance we wouldn't notice such problems if the race didn't\r\noccur.) I'll play around with it.\r\n\r\n>> + /*\r\n>> + * We just log a message if a file doesn't fit the pattern, it's\r\n>> + * probably some editor's lock/state file or similar...\r\n>> + */\r\n>\r\n> An editor's lock file that starts with map- would presumably be the whole\r\n> filename plus an additional file-ending. But this check won't catch those.\r\n\r\nRight, it will either fsync() or unlink() those. I'll work on the\r\ncomment. Or do you think it's worth validating that there are no\r\ntrailing characters? I looked into that a bit earlier, and the code\r\nfelt excessive to me, but I don't have a strong opinion here.\r\n\r\n>> + * Unlike failures to unlink() old logical rewrite files, we must\r\n>> + * ERROR if we're unable to sync transient state down to disk. This\r\n>> + * allows replay to assume that everything written out before\r\n>> + * checkpoint start is persisted.\r\n>> */\r\n>\r\n> Hm, not quite happy with the second bit. Checkpointed state being durable\r\n> isn't about replay directly, it's about the basic property of a checkpoint\r\n> being fulfilled?\r\n\r\nI'll work on this. FWIW I modeled this based on the comment above the\r\nfunction. 
Do you think that should be adjusted as well?\r\n\r\n>> + /* persist directory entries to disk */\r\n>> + fsync_fname(\"pg_logical/mappings\", true);\r\n>\r\n> This is probably worth backpatching, if you split it out I'll do so.\r\n\r\nSure thing, will do shortly.\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 20 Jan 2022 20:23:48 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files\n to save checkpoint work" }, { "msg_contents": "On 1/20/22, 12:24 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> On 1/20/22, 11:47 AM, \"Andres Freund\" <andres@anarazel.de> wrote:\r\n>>> + /* persist directory entries to disk */\r\n>>> + fsync_fname(\"pg_logical/mappings\", true);\r\n>>\r\n>> This is probably worth backpatching, if you split it out I'll do so.\r\n>\r\n> Sure thing, will do shortly.\r\n\r\nHere's this part.\r\n\r\nNathan", "msg_date": "Thu, 20 Jan 2022 20:41:16 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "On 2022-01-20 20:41:16 +0000, Bossart, Nathan wrote:\n> Here's this part.\n\nAnd pushed to all branches. Thanks.\n\n\n", "msg_date": "Fri, 21 Jan 2022 11:49:56 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Fri, Jan 21, 2022 at 11:49:56AM -0800, Andres Freund wrote:\n> On 2022-01-20 20:41:16 +0000, Bossart, Nathan wrote:\n>> Here's this part.\n> \n> And pushed to all branches. Thanks.\n\nThanks!\n\nI spent some time thinking about the right way to proceed here, and I came\nup with the attached patches. 
The first patch just adds error checking for\nvarious lstat() calls in the replication code. If lstat() fails, then it\nprobably doesn't make sense to try to continue processing the file.\n\nThe second patch changes some nearby calls to ereport() to ERROR. If these\nfailures are truly unexpected, and we don't intend to support use-cases\nlike concurrent manual deletion, then failing might be the right way to go.\nI think it's a shame that such failures could cause checkpointing to\ncontinually fail, but that topic is already being discussed elsewhere [0].\n\n[0] https://postgr.es/m/C1EE64B0-D4DB-40F3-98C8-0CED324D34CB%40amazon.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com/", "msg_date": "Wed, 26 Jan 2022 17:01:09 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Thu, Jan 27, 2022 at 6:31 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> I spent some time thinking about the right way to proceed here, and I came\n> up with the attached patches. The first patch just adds error checking for\n> various lstat() calls in the replication code. If lstat() fails, then it\n> probably doesn't make sense to try to continue processing the file.\n>\n> The second patch changes some nearby calls to ereport() to ERROR. If these\n> failures are truly unexpected, and we don't intend to support use-cases\n> like concurrent manual deletion, then failing might be the right way to go.\n> I think it's a shame that such failures could cause checkpointing to\n> continually fail, but that topic is already being discussed elsewhere [0].\n>\n> [0] https://postgr.es/m/C1EE64B0-D4DB-40F3-98C8-0CED324D34CB%40amazon.com\n\nAfter an off-list discussion with Andreas, proposing here a patch that\nbasically replaces ReadDir call with ReadDirExtended and gets rid of\nlstat entirely. 
With this change, the checkpoint will only care about\nthe snapshot and mapping files and not fail if it finds other files in\nthe directories. Removing lstat enables us to make things faster as we\navoid a bunch of extra system calls - one lstat call per each mapping\nor snapshot file.\n\nThis patch also includes converting \"could not parse filename\" and\n\"could not remove file\" errors to LOG messages in\nCheckPointLogicalRewriteHeap. This will enable checkpoint not to waste\nthe amount of work that it has done in case it is unable to\nparse/remove the file, for whatever reasons.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Mon, 31 Jan 2022 10:42:54 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "On Mon, Jan 31, 2022 at 10:42:54AM +0530, Bharath Rupireddy wrote:\n> After an off-list discussion with Andreas, proposing here a patch that\n> basically replaces ReadDir call with ReadDirExtended and gets rid of\n> lstat entirely. With this change, the checkpoint will only care about\n> the snapshot and mapping files and not fail if it finds other files in\n> the directories. Removing lstat enables us to make things faster as we\n> avoid a bunch of extra system calls - one lstat call per each mapping\n> or snapshot file.\n\nI think removing the lstat() is probably reasonable. We currently aren't\ndoing proper error checking, and the chances of a non-regular file matching\nthe prefix are likely pretty low. In the worst case, we'll LOG or ERROR\nwhen unlinking or fsyncing fails.\n\nHowever, I'm not sure about the change to ReadDirExtended(). That might be\nokay for CheckPointSnapBuild(), which is just trying to remove old files,\nbut CheckPointLogicalRewriteHeap() is responsible for ensuring that files\nare flushed to disk for the checkpoint. 
If we stop reading the directory\nafter an error and let the checkpoint continue, isn't it possible that some\nmappings files won't be persisted to disk?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 1 Feb 2022 15:55:38 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Wed, Feb 2, 2022 at 5:25 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Mon, Jan 31, 2022 at 10:42:54AM +0530, Bharath Rupireddy wrote:\n> > After an off-list discussion with Andreas, proposing here a patch that\n> > basically replaces ReadDir call with ReadDirExtended and gets rid of\n> > lstat entirely. With this chance, the checkpoint will only care about\n> > the snapshot and mapping files and not fail if it finds other files in\n> > the directories. Removing lstat enables us to make things faster as we\n> > avoid a bunch of extra system calls - one lstat call per each mapping\n> > or snapshot file.\n>\n> I think removing the lstat() is probably reasonable. We currently aren't\n> doing proper error checking, and the chances of a non-regular file matching\n> the prefix are likely pretty low. In the worst case, we'll LOG or ERROR\n> when unlinking or fsyncing fails.\n>\n> However, I'm not sure about the change to ReadDirExtended(). That might be\n> okay for CheckPointSnapBuild(), which is just trying to remove old files,\n> but CheckPointLogicalRewriteHeap() is responsible for ensuring that files\n> are flushed to disk for the checkpoint. If we stop reading the directory\n> after an error and let the checkpoint continue, isn't it possible that some\n> mappings files won't be persisted to disk?\n\nUnless I mis-read your above statement, with LOG level in\nReadDirExtended, I don't think we stop reading the files in\nCheckPointLogicalRewriteHeap. 
Am I missing something here?\n\nSince, we also continue in CheckPointLogicalRewriteHeap if we can't\nparse/delete some files with the change of \"could not parse\nfilename\"/\"could not remove file\" messages to LOG level\n\nI'm attaching v6, just changed elog(LOG, to ereport(LOG in\nCheckPointLogicalRewriteHeap, other things remain the same.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Wed, 2 Feb 2022 17:19:26 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "On Wed, Feb 02, 2022 at 05:19:26PM +0530, Bharath Rupireddy wrote:\n> On Wed, Feb 2, 2022 at 5:25 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> However, I'm not sure about the change to ReadDirExtended(). That might be\n>> okay for CheckPointSnapBuild(), which is just trying to remove old files,\n>> but CheckPointLogicalRewriteHeap() is responsible for ensuring that files\n>> are flushed to disk for the checkpoint. If we stop reading the directory\n>> after an error and let the checkpoint continue, isn't it possible that some\n>> mappings files won't be persisted to disk?\n> \n> Unless I mis-read your above statement, with LOG level in\n> ReadDirExtended, I don't think we stop reading the files in\n> CheckPointLogicalRewriteHeap. Am I missing something here?\n\nReadDirExtended() has the following comment:\n\n * If elevel < ERROR, returns NULL after any error. With the normal coding\n * pattern, this will result in falling out of the loop immediately as\n * though the directory contained no (more) entries.\n\nIf there is a problem reading the directory, we will LOG and then exit the\nloop. 
If we didn't scan through all the entries in the directory, there is\na chance that we didn't fsync() all the files that need it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 2 Feb 2022 10:37:38 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Thu, Feb 3, 2022 at 12:07 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Wed, Feb 02, 2022 at 05:19:26PM +0530, Bharath Rupireddy wrote:\n> > On Wed, Feb 2, 2022 at 5:25 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >> However, I'm not sure about the change to ReadDirExtended(). That might be\n> >> okay for CheckPointSnapBuild(), which is just trying to remove old files,\n> >> but CheckPointLogicalRewriteHeap() is responsible for ensuring that files\n> >> are flushed to disk for the checkpoint. If we stop reading the directory\n> >> after an error and let the checkpoint continue, isn't it possible that some\n> >> mappings files won't be persisted to disk?\n> >\n> > Unless I mis-read your above statement, with LOG level in\n> > ReadDirExtended, I don't think we stop reading the files in\n> > CheckPointLogicalRewriteHeap. Am I missing something here?\n>\n> ReadDirExtended() has the following comment:\n>\n> * If elevel < ERROR, returns NULL after any error. With the normal coding\n> * pattern, this will result in falling out of the loop immediately as\n> * though the directory contained no (more) entries.\n>\n> If there is a problem reading the directory, we will LOG and then exit the\n> loop. If we didn't scan through all the entries in the directory, there is\n> a chance that we didn't fsync() all the files that need it.\n\nThanks. I get it. 
For syncing map files, we don't want to tolerate any\nerrors, whereas removal of the old map files (lesser than cutoff LSN)\ncan be tolerated in CheckPointLogicalRewriteHeap.\n\nHere's the v7 version using ReadDir for CheckPointLogicalRewriteHeap.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 3 Feb 2022 09:45:08 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "On Thu, Feb 03, 2022 at 09:45:08AM +0530, Bharath Rupireddy wrote:\n> On Thu, Feb 3, 2022 at 12:07 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> If there is a problem reading the directory, we will LOG and then exit the\n>> loop. If we didn't scan through all the entries in the directory, there is\n>> a chance that we didn't fsync() all the files that need it.\n> \n> Thanks. I get it. For syncing map files, we don't want to tolerate any\n> errors, whereas removal of the old map files (lesser than cutoff LSN)\n> can be tolerated in CheckPointLogicalRewriteHeap.\n\nLGTM. Andres noted upthread [0] that the comment above sscanf() about\nskipping editors' lock files might not be accurate. I don't think it's a\nhuge problem if sscanf() matches those files, but perhaps we can improve\nthe comment.\n\n[0] https://postgr.es/m/20220120194618.hmfd4kxkng2cgryh%40alap3.anarazel.de\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 3 Feb 2022 16:03:40 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Fri, Feb 4, 2022 at 5:33 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> > Thanks. I get it. 
For syncing map files, we don't want to tolerate any\n> > errors, whereas removal of the old map files (lesser than cutoff LSN)\n> > can be tolerated in CheckPointLogicalRewriteHeap.\n>\n> LGTM. Andres noted upthread [0] that the comment above sscanf() about\n> skipping editors' lock files might not be accurate. I don't think it's a\n> huge problem if sscanf() matches those files, but perhaps we can improve\n> the comment.\n>\n> [0] https://postgr.es/m/20220120194618.hmfd4kxkng2cgryh%40alap3.anarazel.de\n\nAndres comment from [0]:\n\n> An editor's lock file that starts with map- would presumably be the whole\n> filename plus an additional file-ending. But this check won't catch those.\n\nAgreed. sscanf checks can't detect the files named \"whole filename\nplus an additional file-ending\". I just checked with vi editor lock\nstate file .0-14ED3B8.snap.swp [1], the log generated is [2]. I'm not\nsure exactly which editor would create a lockfile like \"whole filename\nplus an additional file-ending\".\n\nIn any case, let's remove the editor's lock/state file from those\ncomments and have just only \"We just log a message if a file doesn't\nfit the pattern\". 
Attached v8 patch with that change.\n\n[1]\n-rw------- 1 bharath bharath 12288 Feb 10 15:48 .0-14ED3B8.snap.swp\n-rw------- 1 bharath bharath 128 Feb 10 15:48 0-14ED518.snap\n-rw------- 1 bharath bharath 128 Feb 10 15:49 0-14ED518.snap.lockfile\n-rw------- 1 bharath bharath 128 Feb 10 15:49 0-14ED550.snap\n-rw------- 1 bharath bharath 128 Feb 10 15:49 0-14ED600.snap\n\n[2]\n2022-02-10 15:48:47.938 UTC [1121678] LOG: could not parse file name\n\"pg_logical/snapshots/.0-14ED3B8.snap.swp\"\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 10 Feb 2022 21:30:45 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "On Thu, Feb 10, 2022 at 09:30:45PM +0530, Bharath Rupireddy wrote:\n> In any case, let's remove the editor's lock/state file from those\n> comments and have just only \"We just log a message if a file doesn't\n> fit the pattern\". Attached v8 patch with that change.\n\nI've moved this one to ready-for-committer. 
I was under the impression\nthat Andres was firmly against this approach, but you did mention there was\nan off-list discussion.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 10 Feb 2022 11:39:30 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "Hi,\n\nOn 2022-02-10 21:30:45 +0530, Bharath Rupireddy wrote:\n> From 4801ff2c3b1e7bc7076205b676d4e3bc4a4ed308 Mon Sep 17 00:00:00 2001\n> From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\n> Date: Thu, 10 Feb 2022 15:58:58 +0000\n> Subject: [PATCH v8] Replace ReadDir with ReadDirExtended\n> \n> Replace ReadDir with ReadDirExtended (in CheckPointSnapBuild) and\n> get rid of lstat entirely.\n\nI think this might be based on a slight misunderstanding / bad phrasing on my\npart. We can use get_dirent_type() to optimize away the lstat on most\nplatforms, ReadDirExtended itself doesn't do that automatically. I was trying\nto reference removing lstat calls by using get_dirent_type() in more places...\n\n\n> We still use ReadDir in CheckPointLogicalRewriteHeap\n> because unable to read directory would result a NULL from\n> ReadDirExtended and we may miss to fsync the remaining map files,\n> so here let's error out with ReadDir.\n\nThen why is this skipping the lstat?\n\n\n> Also, convert \"could not parse filename\" and \"could not remove file\"\n> errors to LOG messages in CheckPointLogicalRewriteHeap. 
This will\n> enable checkpoint not to waste the amount of work that it had done.\n\nI still doubt this is a good idea.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Feb 2022 09:09:52 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Tue, Feb 15, 2022 at 09:09:52AM -0800, Andres Freund wrote:\n> On 2022-02-10 21:30:45 +0530, Bharath Rupireddy wrote:\n>> Replace ReadDir with ReadDirExtended (in CheckPointSnapBuild) and\n>> get rid of lstat entirely.\n> \n> I think this might be based on a slight misunderstanding / bad phrasing on my\n> part. We can use get_dirent_type() to optimize away the lstat on most\n> platforms, ReadDirExtended itself doesn't do that automatically. I was trying\n> to reference removing lstat calls by using get_dirent_type() in more places...\n> \n> \n>> We still use ReadDir in CheckPointLogicalRewriteHeap\n>> because unable to read directory would result a NULL from\n>> ReadDirExtended and we may miss to fsync the remaining map files,\n>> so here let's error out with ReadDir.\n> \n> Then why is this skipping the lstat?\n> \n> \n>> Also, convert \"could not parse filename\" and \"could not remove file\"\n>> errors to LOG messages in CheckPointLogicalRewriteHeap. 
This will\n>> enable checkpoint not to waste the amount of work that it had done.\n> \n> I still doubt this is a good idea.\n\nIIUC you are advocating for something more like the attached patches.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 15 Feb 2022 09:57:53 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "Hi,\n\nOn 2022-02-15 09:57:53 -0800, Nathan Bossart wrote:\n> IIUC you are advocating for something more like the attached patches.\n\nAt least for patch 1 yes. Don't have the cycles just now to look at the\nothers.\n\nI generally think it'd be a good exercise to go through an use\nget_dirent_type() in nearly all ReadDir() style loops - it's a nice efficiency\nwin in general, and IIRC a particularly big one on windows.\n\nIt'd probably be good to add a reference to get_dirent_type() to\nReadDir[Extended]()'s docs.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Feb 2022 10:10:34 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Tue, Feb 15, 2022 at 10:10:34AM -0800, Andres Freund wrote:\n> I generally think it'd be a good exercise to go through an use\n> get_dirent_type() in nearly all ReadDir() style loops - it's a nice efficiency\n> win in general, and IIRC a particularly big one on windows.\n> \n> It'd probably be good to add a reference to get_dirent_type() to\n> ReadDir[Extended]()'s docs.\n\nAgreed. 
I might give this a try.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 15 Feb 2022 10:37:58 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Tue, Feb 15, 2022 at 10:37:58AM -0800, Nathan Bossart wrote:\n> On Tue, Feb 15, 2022 at 10:10:34AM -0800, Andres Freund wrote:\n>> I generally think it'd be a good exercise to go through an use\n>> get_dirent_type() in nearly all ReadDir() style loops - it's a nice efficiency\n>> win in general, and IIRC a particularly big one on windows.\n>> \n>> It'd probably be good to add a reference to get_dirent_type() to\n>> ReadDir[Extended]()'s docs.\n> \n> Agreed. I might give this a try.\n\nAlright, here is a new patch set where I've tried to replace as many\nstat()/lstat() calls as possible with get_dirent_type(). 0002 and 0003 are\nthe same as v9. 
I noticed a few remaining stat()/lstat() calls that don't\nappear to be doing proper error checking, but I haven't had a chance to try\nfixing those yet.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 15 Feb 2022 15:11:23 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Wed, Feb 16, 2022 at 12:11 PM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n> On Tue, Feb 15, 2022 at 10:37:58AM -0800, Nathan Bossart wrote:\n> > On Tue, Feb 15, 2022 at 10:10:34AM -0800, Andres Freund wrote:\n> >> I generally think it'd be a good exercise to go through an use\n> >> get_dirent_type() in nearly all ReadDir() style loops - it's a nice efficiency\n> >> win in general, and IIRC a particularly big one on windows.\n> >>\n> >> It'd probably be good to add a reference to get_dirent_type() to\n> >> ReadDir[Extended]()'s docs.\n> >\n> > Agreed. I might give this a try.\n>\n> Alright, here is a new patch set where I've tried to replace as many\n> stat()/lstat() calls as possible with get_dirent_type(). 0002 and 0003 are\n> the same as v9. I noticed a few remaining stat()/lstat() calls that don't\n> appear to be doing proper error checking, but I haven't had a chance to try\n> fixing those yet.\n\n0001: These get_dirent_type() conversions are nice. 
LGTM.\n\n0002:\n\n /* we're only handling directories here, skip if it's not ours */\n- if (lstat(path, &statbuf) == 0 && !S_ISDIR(statbuf.st_mode))\n+ if (lstat(path, &statbuf) != 0)\n+ ereport(ERROR,\n+ (errcode_for_file_access(),\n+ errmsg(\"could not stat file \\\"%s\\\": %m\", path)));\n+ else if (!S_ISDIR(statbuf.st_mode))\n return;\n\nWhy is this a good place to silently ignore non-directories?\nStartupReorderBuffer() is already in charge of skipping random\ndetritus found in the directory, so would it be better to do \"if\n(get_dirent_type(...) != PGFILETYPE_DIR) continue\" there, and then\ndrop the lstat() stanza from ReorderBufferCleanupSerializedTXNs()\ncompletely? Then perhaps its ReadDirExtended() should be using ERROR\ninstead of INFO, so that missing/non-dir/b0rked directories raise an\nerror. I don't understand why it's reporting readdir() errors at INFO\nbut unlink() errors at ERROR, and as far as I can see the other paths\nthat reach this code shouldn't be sending in paths to non-directories\nhere unless something is seriously busted and that's ERROR-worthy.\n\n0003: I haven't studied this cure vs disease thing... 
no opinion today.\n\n\n", "msg_date": "Thu, 24 Mar 2022 13:17:01 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "Thanks for taking a look!\n\nOn Thu, Mar 24, 2022 at 01:17:01PM +1300, Thomas Munro wrote:\n> /* we're only handling directories here, skip if it's not ours */\n> - if (lstat(path, &statbuf) == 0 && !S_ISDIR(statbuf.st_mode))\n> + if (lstat(path, &statbuf) != 0)\n> + ereport(ERROR,\n> + (errcode_for_file_access(),\n> + errmsg(\"could not stat file \\\"%s\\\": %m\", path)));\n> + else if (!S_ISDIR(statbuf.st_mode))\n> return;\n> \n> Why is this a good place to silently ignore non-directories?\n> StartupReorderBuffer() is already in charge of skipping random\n> detritus found in the directory, so would it be better to do \"if\n> (get_dirent_type(...) != PGFILETYPE_DIR) continue\" there, and then\n> drop the lstat() stanza from ReorderBufferCleanupSeralizedTXNs()\n> completely? Then perhaps its ReadDirExtended() shoud be using ERROR\n> instead of INFO, so that missing/non-dir/b0rked directories raise an\n> error.\n\nMy guess is that this was done because ReorderBufferCleanupSerializedTXNs()\nis also called from ReorderBufferAllocate() and ReorderBufferFree().\nHowever, it is odd that we just silently return if the slot path isn't a\ndirectory in those cases. I think we could use get_dirent_type() in\nStartupReorderBuffer() as you suggested, and then we could let ReadDir()\nERROR for non-directories for the other callers of\nReorderBufferCleanupSerializedTXNs(). WDYT?\n\n> I don't understand why it's reporting readdir() errors at INFO\n> but unlink() errors at ERROR, and as far as I can see the other paths\n> that reach this code shouldn't be sending in paths to non-directories\n> here unless something is seriously busted and that's ERROR-worthy.\n\nI agree. 
I'll switch it to ReadDir() in the next revision so that we ERROR\ninstead of INFO.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 29 Mar 2022 15:48:32 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Tue, Mar 29, 2022 at 03:48:32PM -0700, Nathan Bossart wrote:\n> On Thu, Mar 24, 2022 at 01:17:01PM +1300, Thomas Munro wrote:\n>> /* we're only handling directories here, skip if it's not ours */\n>> - if (lstat(path, &statbuf) == 0 && !S_ISDIR(statbuf.st_mode))\n>> + if (lstat(path, &statbuf) != 0)\n>> + ereport(ERROR,\n>> + (errcode_for_file_access(),\n>> + errmsg(\"could not stat file \\\"%s\\\": %m\", path)));\n>> + else if (!S_ISDIR(statbuf.st_mode))\n>> return;\n>> \n>> Why is this a good place to silently ignore non-directories?\n>> StartupReorderBuffer() is already in charge of skipping random\n>> detritus found in the directory, so would it be better to do \"if\n>> (get_dirent_type(...) != PGFILETYPE_DIR) continue\" there, and then\n>> drop the lstat() stanza from ReorderBufferCleanupSeralizedTXNs()\n>> completely? Then perhaps its ReadDirExtended() shoud be using ERROR\n>> instead of INFO, so that missing/non-dir/b0rked directories raise an\n>> error.\n> \n> My guess is that this was done because ReorderBufferCleanupSerializedTXNs()\n> is also called from ReorderBufferAllocate() and ReorderBufferFree().\n> However, it is odd that we just silently return if the slot path isn't a\n> directory in those cases. I think we could use get_dirent_type() in\n> StartupReorderBuffer() as you suggested, and then we could let ReadDir()\n> ERROR for non-directories for the other callers of\n> ReorderBufferCleanupSerializedTXNs(). 
WDYT?\n> \n>> I don't understand why it's reporting readdir() errors at INFO\n>> but unlink() errors at ERROR, and as far as I can see the other paths\n>> that reach this code shouldn't be sending in paths to non-directories\n>> here unless something is seriously busted and that's ERROR-worthy.\n> \n> I agree. I'll switch it to ReadDir() in the next revision so that we ERROR\n> instead of INFO.\n\nHere is an updated patch set.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 30 Mar 2022 09:21:30 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Wed, Mar 30, 2022 at 09:21:30AM -0700, Nathan Bossart wrote:\n> Here is an updated patch set.\n\nrebased\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 8 Apr 2022 13:18:57 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Sat, Apr 9, 2022 at 1:49 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Wed, Mar 30, 2022 at 09:21:30AM -0700, Nathan Bossart wrote:\n> > Here is an updated patch set.\n>\n> rebased\n\nThanks.\n\n0001 - there are many places where lstat/stat is being used - don't we\nneed to replace all or most of them with get_dirent_type?\n\n0002 - I'm not quite happy with this patch, with the change,\ncheckpoint errors out, if it can't remove just a file - the comments\nthere says it all. Is there any strong reason for this change?\n\n- /*\n- * It's not particularly harmful, though strange, if we can't\n- * remove the file here. 
Don't prevent the checkpoint from\n- * completing, that'd be a cure worse than the disease.\n- */\n if (unlink(path) < 0)\n- {\n- ereport(LOG,\n+ ereport(ERROR,\n (errcode_for_file_access(),\n errmsg(\"could not remove file \\\"%s\\\": %m\",\n path)));\n- continue;\n- }\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 8 Jul 2022 21:39:10 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "On Fri, Jul 08, 2022 at 09:39:10PM +0530, Bharath Rupireddy wrote:\n> 0001 - there are many places where lstat/stat is being used - don't we\n> need to replace all or most of them with get_dirent_type?\n\nIt's been a while since I wrote this one, but I believe my intent was to\nreplace as many [l]stat() calls in ReadDir()-style loops as possible with\nget_dirent_type(). Are there any that I've missed?\n\n> 0002 - I'm not quite happy with this patch, with the change,\n> checkpoint errors out, if it can't remove just a file - the comments\n> there says it all. Is there any strong reason for this change?\n\nAndres noted several concerns upthread. In short, ignoring unexpected\nerrors makes them harder to debug and likely masks bugs.\n\nFWIW I agree that it is unfortunate that a relatively non-critical error\nhere leads to checkpoint failures, which can cause much worse problems down\nthe road. 
I think this is one of the reasons for moving tasks like this\nout of the checkpointer, as I'm trying to do with the proposed custodian\nprocess [0].\n\n[0] https://commitfest.postgresql.org/38/3448/\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 8 Jul 2022 10:14:39 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "Hi,\n\nOn 2022-04-08 13:18:57 -0700, Nathan Bossart wrote:\n> @@ -1035,32 +1036,9 @@ ParseConfigDirectory(const char *includedir,\n> \n> \t\tjoin_path_components(filename, directory, de->d_name);\n> \t\tcanonicalize_path(filename);\n> -\t\tif (stat(filename, &st) == 0)\n> +\t\tde_type = get_dirent_type(filename, de, true, elevel);\n> +\t\tif (de_type == PGFILETYPE_ERROR)\n> \t\t{\n> -\t\t\tif (!S_ISDIR(st.st_mode))\n> -\t\t\t{\n> -\t\t\t\t/* Add file to array, increasing its size in blocks of 32 */\n> -\t\t\t\tif (num_filenames >= size_filenames)\n> -\t\t\t\t{\n> -\t\t\t\t\tsize_filenames += 32;\n> -\t\t\t\t\tfilenames = (char **) repalloc(filenames,\n> -\t\t\t\t\t\t\t\t\t\t\tsize_filenames * sizeof(char *));\n> -\t\t\t\t}\n> -\t\t\t\tfilenames[num_filenames] = pstrdup(filename);\n> -\t\t\t\tnum_filenames++;\n> -\t\t\t}\n> -\t\t}\n> -\t\telse\n> -\t\t{\n> -\t\t\t/*\n> -\t\t\t * stat does not care about permissions, so the most likely reason\n> -\t\t\t * a file can't be accessed now is if it was removed between the\n> -\t\t\t * directory listing and now.\n> -\t\t\t */\n> -\t\t\tereport(elevel,\n> -\t\t\t\t\t(errcode_for_file_access(),\n> -\t\t\t\t\t errmsg(\"could not stat file \\\"%s\\\": %m\",\n> -\t\t\t\t\t\t\tfilename)));\n> \t\t\trecord_config_file_error(psprintf(\"could not stat file \\\"%s\\\"\",\n> \t\t\t\t\t\t\t\t\t\t\t filename),\n> \t\t\t\t\t\t\t\t\t calling_file, calling_lineno,\n> @@ -1068,6 +1046,18 @@ 
ParseConfigDirectory(const char *includedir,\n> \t\t\tstatus = false;\n> \t\t\tgoto cleanup;\n> \t\t}\n> +\t\telse if (de_type != PGFILETYPE_DIR)\n> +\t\t{\n> +\t\t\t/* Add file to array, increasing its size in blocks of 32 */\n> +\t\t\tif (num_filenames >= size_filenames)\n> +\t\t\t{\n> +\t\t\t\tsize_filenames += 32;\n> +\t\t\t\tfilenames = (char **) repalloc(filenames,\n> +\t\t\t\t\t\t\t\t\t\t\t size_filenames * sizeof(char *));\n> +\t\t\t}\n> +\t\t\tfilenames[num_filenames] = pstrdup(filename);\n> +\t\t\tnum_filenames++;\n> +\t\t}\n> \t}\n> \n> \tif (num_filenames > 0)\n\nSeems like the diff would be easier to read if it didn't move code around as\nmuch?\n\nLooks pretty reasonable, I'd be happy to commit it, I think.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 11 Jul 2022 13:25:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Mon, Jul 11, 2022 at 01:25:33PM -0700, Andres Freund wrote:\n> On 2022-04-08 13:18:57 -0700, Nathan Bossart wrote:\n>> @@ -1035,32 +1036,9 @@ ParseConfigDirectory(const char *includedir,\n>> \n>> \t\tjoin_path_components(filename, directory, de->d_name);\n>> \t\tcanonicalize_path(filename);\n>> -\t\tif (stat(filename, &st) == 0)\n>> +\t\tde_type = get_dirent_type(filename, de, true, elevel);\n>> +\t\tif (de_type == PGFILETYPE_ERROR)\n>> \t\t{\n>> -\t\t\tif (!S_ISDIR(st.st_mode))\n>> -\t\t\t{\n>> -\t\t\t\t/* Add file to array, increasing its size in blocks of 32 */\n>> -\t\t\t\tif (num_filenames >= size_filenames)\n>> -\t\t\t\t{\n>> -\t\t\t\t\tsize_filenames += 32;\n>> -\t\t\t\t\tfilenames = (char **) repalloc(filenames,\n>> -\t\t\t\t\t\t\t\t\t\t\tsize_filenames * sizeof(char *));\n>> -\t\t\t\t}\n>> -\t\t\t\tfilenames[num_filenames] = pstrdup(filename);\n>> -\t\t\t\tnum_filenames++;\n>> -\t\t\t}\n>> -\t\t}\n>> -\t\telse\n>> -\t\t{\n>> -\t\t\t/*\n>> 
-\t\t\t * stat does not care about permissions, so the most likely reason\n>> -\t\t\t * a file can't be accessed now is if it was removed between the\n>> -\t\t\t * directory listing and now.\n>> -\t\t\t */\n>> -\t\t\tereport(elevel,\n>> -\t\t\t\t\t(errcode_for_file_access(),\n>> -\t\t\t\t\t errmsg(\"could not stat file \\\"%s\\\": %m\",\n>> -\t\t\t\t\t\t\tfilename)));\n>> \t\t\trecord_config_file_error(psprintf(\"could not stat file \\\"%s\\\"\",\n>> \t\t\t\t\t\t\t\t\t\t\t filename),\n>> \t\t\t\t\t\t\t\t\t calling_file, calling_lineno,\n>> @@ -1068,6 +1046,18 @@ ParseConfigDirectory(const char *includedir,\n>> \t\t\tstatus = false;\n>> \t\t\tgoto cleanup;\n>> \t\t}\n>> +\t\telse if (de_type != PGFILETYPE_DIR)\n>> +\t\t{\n>> +\t\t\t/* Add file to array, increasing its size in blocks of 32 */\n>> +\t\t\tif (num_filenames >= size_filenames)\n>> +\t\t\t{\n>> +\t\t\t\tsize_filenames += 32;\n>> +\t\t\t\tfilenames = (char **) repalloc(filenames,\n>> +\t\t\t\t\t\t\t\t\t\t\t size_filenames * sizeof(char *));\n>> +\t\t\t}\n>> +\t\t\tfilenames[num_filenames] = pstrdup(filename);\n>> +\t\t\tnum_filenames++;\n>> +\t\t}\n>> \t}\n>> \n>> \tif (num_filenames > 0)\n> \n> Seems like the diff would be easier to read if it didn't move code around as\n> much?\n\nI opted to reorganize in order to save an indent and simplify the conditions a\nbit. 
The alternative is something like this:\n\n\tif (de_type != PGFILETYPE_ERROR)\n\t{\n\t\tif (de_type != PGFILETYPE_DIR)\n\t\t{\n\t\t\t...\n\t\t}\n\t}\n\telse\n\t{\n\t\t...\n\t}\n\nI don't feel strongly one way or another, and I'll change it if you think\nit's worth it to simplify the diff.\n\n> Looks pretty reasonable, I'd be happy to commit it, I think.\n\nAppreciate it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 11 Jul 2022 14:23:54 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Fri, Jul 8, 2022 at 10:44 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> > 0002 - I'm not quite happy with this patch, with the change,\n> > checkpoint errors out, if it can't remove just a file - the comments\n> > there say it all. Is there any strong reason for this change?\n>\n> Andres noted several concerns upthread. In short, ignoring unexpected\n> errors makes them harder to debug and likely masks bugs.\n>\n> FWIW I agree that it is unfortunate that a relatively non-critical error\n> here leads to checkpoint failures, which can cause much worse problems down\n> the road. I think this is one of the reasons for moving tasks like this\n> out of the checkpointer, as I'm trying to do with the proposed custodian\n> process [0].\n>\n> [0] https://commitfest.postgresql.org/38/3448/\n\nIMHO, we can keep it as-is and if required can be changed as part of\nthe patch set [0], as this change without [0] can cause a checkpoint\nto fail. 
Furthermore, I would like it if we converted \"could not parse\nfilename\" and \"could not remove file\" ERRORs of\nCheckPointLogicalRewriteHeap to LOGs until [0] gets in - others may\nhave different opinions though.\n\nJust wondering - do we ever have a problem if we can't remove the\nsnapshot or mapping file?\n\nMaybe unrelated, RemoveTempXlogFiles doesn't even bother to check if\nthe temp wal file is removed.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 18 Jul 2022 16:53:18 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "On Mon, Jul 18, 2022 at 04:53:18PM +0530, Bharath Rupireddy wrote:\n> Just wondering - do we ever have a problem if we can't remove the\n> snapshot or mapping file?\n\nBesides running out of disk space, there appears to be a transaction ID\nwraparound risk with the mappings files.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 18 Jul 2022 11:18:42 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "Working on get_dirent_type() reminded me of this thread. I was going\nto commit the first of these patches, but then I noticed Andres said\nhe was planning to, so I'll wait another day. 
Here they are, with\ncommit messages but otherwise unchanged from Nathan's v12 except for a\nslight comment tweak:\n\n- /* we're only handling directories here, skip if it's\nnot ours */\n+ /* we're only handling directories here, skip if it's not one */\n\nThe only hunk I'm having second thoughts about is the following, which\nmakes unexpected stray files break checkpoints:\n\n- * We just log a message if a file doesn't fit the pattern, it's\n- * probably some editors lock/state file or similar...\n */\n if (sscanf(snap_de->d_name, \"%X-%X.snap\", &hi, &lo) != 2)\n- {\n- ereport(LOG,\n+ ereport(ERROR,\n (errmsg(\"could not parse file\nname \\\"%s\\\"\", path)));\n\nBharath mentioned other places that loop over stat(), but I think\nthose are places that want stuff we don't already have, like st_size.", "msg_date": "Tue, 9 Aug 2022 15:10:41 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> The only hunk I'm having second thoughts about is the following, which\n> makes unexpected stray files break checkpoints:\n\nSounds like a pretty bad idea. What's the upside?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Aug 2022 23:17:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "I wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> The only hunk I'm having second thoughts about is the following, which\n>> makes unexpected stray files break checkpoints:\n\n> Sounds like a pretty bad idea. What's the upside?\n\nActually, having now read the patch, I don't think there is any\npart of 0002 that is a good idea. 
It's blithely removing the\ncomments that explain why the existing coding is the way it is,\nand not providing a shred of justification for making checkpoints\nmore brittle.\n\nI have not tried to analyze the error-handling properties of 0001,\nbut if it's being equally cavalier then it shouldn't be committed\neither. Most of this behavior is the result of decades of hard-won\nexperience; discarding it because it doesn't fit conveniently\ninto some refactoring plan isn't smart.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Aug 2022 23:27:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "On Tue, Aug 9, 2022 at 3:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Actually, having now read the patch, I don't think there is any\n> part of 0002 that is a good idea. It's blithely removing the\n> comments that explain why the existing coding is the way it is,\n> and not providing a shred of justification for making checkpoints\n> more brittle.\n\n0002 also contradicts the original $SUBJECT and goal of this thread,\nwhich is possibly why it was kept separate. I was only thinking of\ncommitting 0001 myself, which is the one I'd reviewed an earlier\nversion of.\n\n> I have not tried to analyze the error-handling properties of 0001,\n> but if it's being equally cavalier then it shouldn't be committed\n> either. 
Most of this behavior is the result of decades of hard-won\n> experience; discarding it because it doesn't fit conveniently\n> into some refactoring plan isn't smart.\n\n0001 does introduce new errors, as mentioned in the commit message, in\nthe form of elevel ERROR passed into get_dirent_type(), which might be\nthrown if your OS has no d_type and lstat() fails (also if you asked\nto follow symlinks, but in those cases errors were already thrown).\nBut in those cases, it seems at least a little fishy that we ignored\nerrors from the existing lstat(). I wondered if that was because they\nexpected that any failure meant ENOENT and they wanted to tolerate\nthat, but that does not seem to be the case, so I considered the error\nto be an improvement.\n\n\n", "msg_date": "Tue, 9 Aug 2022 15:41:14 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Tue, Aug 9, 2022 at 3:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I have not tried to analyze the error-handling properties of 0001,\n>> but if it's being equally cavalier then it shouldn't be committed\n>> either.\n\n> 0001 does introduce new errors, as mentioned in the commit message, in\n> the form of elevel ERROR passed into get_dirent_type(), which might be\n> thrown if your OS has no d_type and lstat() fails (also if you asked\n> to follow symlinks, but in those cases errors were already thrown).\n> But in those cases, it seems at least a little fishy that we ignored\n> errors from the existing lstat().\n\nHmmm ... I'll grant that ignoring lstat errors altogether isn't great.\nBut should the replacement behavior be elog-LOG-and-press-on,\nor elog-ERROR-and-fail-the-surrounding-operation? 
I'm not in any\nhurry to believe that the latter is more appropriate without some\nanalysis of what the callers are doing.\n\nThe bottom line here is that I'm distrustful of behavioral changes\nintroduced to simplify refactoring rather than to solve a live\nproblem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Aug 2022 23:50:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "On Tue, Aug 9, 2022 at 9:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Tue, Aug 9, 2022 at 3:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I have not tried to analyze the error-handling properties of 0001,\n> >> but if it's being equally cavalier then it shouldn't be committed\n> >> either.\n>\n> > 0001 does introduce new errors, as mentioned in the commit message, in\n> > the form of elevel ERROR passed into get_dirent_type(), which might be\n> > thrown if your OS has no d_type and lstat() fails (also if you asked\n> > to follow symlinks, but in those cases errors were already thrown).\n> > But in those cases, it seems at least a little fishy that we ignored\n> > errors from the existing lstat().\n>\n> Hmmm ... I'll grant that ignoring lstat errors altogether isn't great.\n> But should the replacement behavior be elog-LOG-and-press-on,\n> or elog-ERROR-and-fail-the-surrounding-operation? I'm not in any\n> hurry to believe that the latter is more appropriate without some\n> analysis of what the callers are doing.\n>\n> The bottom line here is that I'm distrustful of behavioral changes\n> introduced to simplify refactoring rather than to solve a live\n> problem.\n\n+1. I agree with Tom not to change elog-LOG to elog-ERROR and fail the\ncheckpoint operation. 
Because the checkpoint is more important than\nwhy a single snapshot file (out of thousands or even millions of files) isn't\nremoved at that moment. Also, I originally proposed to change\nelog-ERROR to elog-LOG in CheckPointLogicalRewriteHeap for unlink()\nfailures for the same reason.\n\n--\nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n", "msg_date": "Tue, 9 Aug 2022 09:29:16 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "On 2022-08-09 15:10:41 +1200, Thomas Munro wrote:\n> I was going to commit the first of these patches, but then I noticed Andres\n> said he was planning to, so I'll wait another day.\n\nI'd be happy for you to take this on...\n\n\n", "msg_date": "Mon, 8 Aug 2022 21:07:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Tue, Aug 09, 2022 at 09:29:16AM +0530, Bharath Rupireddy wrote:\n> On Tue, Aug 9, 2022 at 9:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hmmm ... I'll grant that ignoring lstat errors altogether isn't great.\n>> But should the replacement behavior be elog-LOG-and-press-on,\n>> or elog-ERROR-and-fail-the-surrounding-operation? I'm not in any\n>> hurry to believe that the latter is more appropriate without some\n>> analysis of what the callers are doing.\n>>\n>> The bottom line here is that I'm distrustful of behavioral changes\n>> introduced to simplify refactoring rather than to solve a live\n>> problem.\n> \n> +1. I agree with Tom not to change elog-LOG to elog-ERROR and fail the\n> checkpoint operation. 
Because the checkpoint is more important than\n> why a single snapshot file (out of thousands or even millions of files) isn't\n> removed at that moment. Also, I originally proposed to change\n> elog-ERROR to elog-LOG in CheckPointLogicalRewriteHeap for unlink()\n> failures for the same reason.\n\nThis was my initial instinct as well, but this thread has received\ncontradictory feedback during the months since.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 8 Aug 2022 21:28:02 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Tue, Aug 9, 2022 at 4:28 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> On Tue, Aug 09, 2022 at 09:29:16AM +0530, Bharath Rupireddy wrote:\n> > On Tue, Aug 9, 2022 at 9:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Hmmm ... I'll grant that ignoring lstat errors altogether isn't great.\n> >> But should the replacement behavior be elog-LOG-and-press-on,\n> >> or elog-ERROR-and-fail-the-surrounding-operation? I'm not in any\n> >> hurry to believe that the latter is more appropriate without some\n> >> analysis of what the callers are doing.\n> >>\n> >> The bottom line here is that I'm distrustful of behavioral changes\n> >> introduced to simplify refactoring rather than to solve a live\n> >> problem.\n> >\n> > +1. I agree with Tom not to change elog-LOG to elog-ERROR and fail the\n> > checkpoint operation. Because the checkpoint is more important than\n\nThe changes were not from elog-LOG to elog-ERROR, they were changes\nfrom silently eating errors, but I take your (plural) general point.\n\n> > why a single snapshot file (out of thousands or even millions of files) isn't\n> > removed at that moment. 
Also, I originally proposed to change\n> > elog-ERROR to elog-LOG in CheckPointLogicalRewriteHeap for unlink()\n> > failures for the same reason.\n>\n> This was my initial instinct as well, but this thread has received\n> contradictory feedback during the months since.\n\nOK, so there aren't many places in 0001 that error out where we\npreviously did not.\n\nFor the one in CheckPointLogicalRewriteHeap(), if it fails, you don't\njust want to skip -- it might be in the range that we need to fsync().\nSo what if we just deleted that get_dirent_type() != PGFILETYPE_REG\ncheck altogether? Depending on the name range check, we'd either\nattempt to unlink() and LOG if that fails, or we'd attempt to\nopen()-or-ERROR and then fsync()-or-PANIC. What functionality have we\nlost without the DT_REG check? Well, now if someone ill-advisedly\nputs an important character device, socket, symlink (ie any kind of\nnon-DT_REG) into that directory with a name conforming to the\nunlinkable range, we'll try to unlink it and possibly succeed, where\npreviously we'd have skipped, and if it's in the checkpointable range,\nwe'd try to fsync() it and likely fail, which is good because this is\nsupposed to be a checkpoint.\n\nThat's already what happens if lstat() fails in master. We fall back\nto treating it like a DT_REG anyway:\n\n    if (lstat(path, &statbuf) == 0 && !S_ISREG(statbuf.st_mode))\n        continue;\n\nFor the one in CheckSnapBuild(), it's similar but unlink only.\n\nFor the one in StartupReplicationSlots(), you could possibly take the\nsame line: if it's not a directory, we'll try to rmdir() it anyway and\nthen emit a WARNING if that fails. Again, that's exactly what master\nwould do if that lstat() failed.\n\nIn other words, if we're going to LOG and press on, maybe we should\njust press on anyway. Would the error message be any less\ninformative? 
Would the risk of unlinking a character device that you\naccidentally stored in\n/clusters/pg16-main/pgdata/pg_logical/mappings/map-1234-5678 be a\nproblem for anyone?\n\nSeparately from that topic, it would be nice to have a comment in each\nplace where we tolerate unlink() failure to explain why that is\nharmless except for wasted disk.\n\n\n", "msg_date": "Tue, 9 Aug 2022 17:26:39 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "On Tue, Aug 09, 2022 at 05:26:39PM +1200, Thomas Munro wrote:\n> The changes were not from elog-LOG to elog-ERROR, they were changes\n> from silently eating errors, but I take your (plural) general point.\n\nI think this was in reference to 0002. My comment was, at least.\n\n> OK, so there aren't many places in 0001 that error out where we\n> previously did not.\n> \n> For the one in CheckPointLogicalRewriteHeap(), if it fails, you don't\n> just want to skip -- it might be in the range that we need to fsync().\n> So what if we just deleted that get_dirent_type() != PGFILETYPE_REG\n> check altogether? Depending on the name range check, we'd either\n> attempt to unlink() and LOG if that fails, or we'd attempt to\n> open()-or-ERROR and then fsync()-or-PANIC.\n\nIt looks like CheckPointLogicalRewriteHeap() presently ERRORs when unlink()\nfails. However, CheckPointSnapBuild() only LOGs when unlink() fails.\nSince mappings files contain transaction IDs, persistent unlink() failures\nmight lead to wraparound, but I don't think failing checkpoints will help\nimprove matters.\n\nBharath's original patch changed CheckPointLogicalRewriteHeap() to LOG on\nsscanf() and unlink() failures. IIUC things would work the way you suggest\nif that was applied.\n\n> What functionality have we\n> lost without the DT_REG check? 
Well, now if someone ill-advisedly\n> puts an important character device, socket, symlink (ie any kind of\n> non-DT_REG) into that directory with a name conforming to the\n> unlinkable range, we'll try to unlink it and possibly succeed, where\n> previously we'd have skipped, and if it's in the checkpointable range,\n> we'd try to fsync() it and likely fail, which is good because this is\n> a supposed to be a checkpoint.\n\nI don't know that I agree that the last part is good. Presumably we don't\nwant checkpointing to fail forever because there's a random non-regular\nfile that can't be fsync'd. But it's not clear why such files would exist\nin the first place. Presumably this isn't the sort of thing that Postgres\nshould be expected to support.\n\n> In other words, if we're going to LOG and press on, maybe we should\n> just press on anyway. Would the error message be any less\n> informative? Would the risk of unlinking a character device that you\n> accidentally stored in\n> /clusters/pg16-main/pgdata/pg_logical/mappings/map-1234-5678 be a\n> problem for anyone?\n\nProbably not.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 9 Aug 2022 11:38:24 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Tue, Aug 9, 2022 at 10:57 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> OK, so there aren't many places in 0001 that error out where we\n> previously did not.\n\nWell, that's not true I believe. 
The 0001 patch introduces\nget_dirent_type() with elevel ERROR which means that lstat failures\nare now reported as ERRORs with ereport(ERROR, as opposed to\ncontinuing on the HEAD.\n\n- if (lstat(path, &statbuf) == 0 && !S_ISREG(statbuf.st_mode))\n+ if (get_dirent_type(path, mapping_de, false, ERROR) != PGFILETYPE_REG)\n continue;\n\n- if (lstat(path, &statbuf) == 0 && !S_ISREG(statbuf.st_mode))\n+ if (get_dirent_type(path, snap_de, false, ERROR) != PGFILETYPE_REG)\n {\n\nso on.\n\nThe main idea of using get_dirent_type() instead of lstat or stat is\nto benefit from avoiding system calls on platforms where the directory\nentry type is stored in dirent structure, but not to cause errors for\nlstat or stat system calls failures where we previously didn't. I\nthink 0001 must be just replacing lstat or stat with get_dirent_type()\nwith elevel ERROR if lstat or stat failure is treated as ERROR\npreviously, otherwise with elevel LOG. I modified it as attached. Once\nthis patch gets committed, then we can continue the discussion as to\nwhether we make elog-LOG to elog-ERROR or vice-versa.\n\nI'm tempted to use get_dirent_type() in RemoveXlogFile() but that\nrequires us to pass struct dirent * from RemoveOldXlogFiles() (as\nattached 0002 for now), If done, this avoids one lstat() system call\nper WAL file. 
IMO, that's okay to pass an extra function parameter to\nRemoveXlogFile() as it is a static function and there can be many\n(thousands of) WAL files at times, so saving system calls (lstat() *\nnumber of WAL files) is definitely an advantage.\n\nThoughts?\n\n--\nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/", "msg_date": "Wed, 10 Aug 2022 12:44:54 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "On Wed, Aug 10, 2022 at 7:15 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> The main idea of using get_dirent_type() instead of lstat or stat is\n> to benefit from avoiding system calls on platforms where the directory\n> entry type is stored in dirent structure, but not to cause errors for\n> lstat or stat system calls failures where we previously didn't.\n\nYeah, I get it... I'm OK with separating mechanical changes like\nlstat() -> get_dirent_type() from policy changes on error handling. I\ninitially thought the ERROR was surely better than what we're doing\nnow (making extra system calls to avoid unlinking unexpected things,\nfor which our only strategy on failure is to try to unlink the thing\nanyway), but it's better to discuss that separately.\n\n> I\n> think 0001 must be just replacing lstat or stat with get_dirent_type()\n> with elevel ERROR if lstat or stat failure is treated as ERROR\n> previously, otherwise with elevel LOG. I modified it as attached. 
Once\n> this patch gets committed, then we can continue the discussion as to\n> whether we make elog-LOG to elog-ERROR or vice-versa.\n\nAgreed, will look at this version tomorrow.\n\n> I'm tempted to use get_dirent_type() in RemoveXlogFile() but that\n> requires us to pass struct dirent * from RemoveOldXlogFiles() (as\n> attached 0002 for now), If done, this avoids one lstat() system call\n> per WAL file. IMO, that's okay to pass an extra function parameter to\n> RemoveXlogFile() as it is a static function and there can be many\n> (thousands of) WAL files at times, so saving system calls (lstat() *\n> number of WAL files) is definitely an advantage.\n\n- lstat(path, &statbuf) == 0 && S_ISREG(statbuf.st_mode) &&\n+ get_dirent_type(path, xlde, false, LOG) != PGFILETYPE_REG &&\n\nI think you wanted ==, but maybe it'd be nicer to pass in a\n\"recyclable\" boolean to RemoveXlogFile()?\n\nIf you're looking for more, rmtree() looks like another.\n\n\n", "msg_date": "Wed, 10 Aug 2022 20:41:04 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "On Wed, Aug 10, 2022 at 2:11 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Wed, Aug 10, 2022 at 7:15 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > The main idea of using get_dirent_type() instead of lstat or stat is\n> > to benefit from avoiding system calls on platforms where the directory\n> > entry type is stored in dirent structure, but not to cause errors for\n> > lstat or stat system calls failures where we previously didn't.\n>\n> Yeah, I get it... I'm OK with separating mechanical changes like\n> lstat() -> get_dirent_type() from policy changes on error handling. 
I\n> initially thought the ERROR was surely better than what we're doing\n> now (making extra system calls to avoid unlinking unexpected things,\n> for which our only strategy on failure is to try to unlink the thing\n> anyway), but it's better to discuss that separately.\n>\n> > I\n> > think 0001 must be just replacing lstat or stat with get_dirent_type()\n> > with elevel ERROR if lstat or stat failure is treated as ERROR\n> > previously, otherwise with elevel LOG. I modified it as attached. Once\n> > this patch gets committed, then we can continue the discussion as to\n> > whether we make elog-LOG to elog-ERROR or vice-versa.\n>\n> Agreed, will look at this version tomorrow.\n\nThanks.\n\n> > I'm tempted to use get_dirent_type() in RemoveXlogFile() but that\n> > requires us to pass struct dirent * from RemoveOldXlogFiles() (as\n> > attached 0002 for now), If done, this avoids one lstat() system call\n> > per WAL file. IMO, that's okay to pass an extra function parameter to\n> > RemoveXlogFile() as it is a static function and there can be many\n> > (thousands of) WAL files at times, so saving system calls (lstat() *\n> > number of WAL files) is definitely an advantage.\n>\n> - lstat(path, &statbuf) == 0 && S_ISREG(statbuf.st_mode) &&\n> + get_dirent_type(path, xlde, false, LOG) != PGFILETYPE_REG &&\n>\n> I think you wanted ==, but maybe it'd be nicer to pass in a\n> \"recyclable\" boolean to RemoveXlogFile()?\n\nAh, my bad, I corrected it now.\n\nIf you mean to do \"bool recyclable = (get_dirent_type(path, xlde,\nfalse, LOG) == PGFILETYPE_REG)\"\" and pass it as parameter to\nRemoveXlogFile(), then we have to do get_dirent_type calls for every\nWAL file even when any of (wal_recycle && *endlogSegNo <= recycleSegNo\n&& XLogCtl->InstallXLogFileSegmentActive) is false which is\nunnecessary and it's better to pass dirent structure as a parameter.\n\n> If you're looking for more, rmtree() looks like another.\n\nAre you suggesting to not use pgfnames() in rmtree() and 
do\nopendir()-readdir()-get_dirent_type() directly? If yes, I don't think\nit's a great idea because rmtree() has recursive calls and holding\nopendir()-readdir() for all rmtree() recursive calls without\nclosedir() may eat up dynamic memory if there are deeper level\ndirectories. I would like to leave it that way. Do you have any other\nideas in mind?\n\nPlease find the v15 patch, I merged the RemoveXlogFile() changes into 0001.\n\n--\nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/", "msg_date": "Wed, 10 Aug 2022 15:28:25 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "On Wed, Aug 10, 2022 at 03:28:25PM +0530, Bharath Rupireddy wrote:\n> \t\tsnprintf(path, sizeof(path), \"pg_logical/mappings/%s\", mapping_de->d_name);\n> -\t\tif (lstat(path, &statbuf) == 0 && !S_ISREG(statbuf.st_mode))\n> +\t\tif (get_dirent_type(path, mapping_de, false, LOG) != PGFILETYPE_REG)\n> \t\t\tcontinue;\n\nPreviously, failure to lstat() wouldn't lead to skipping the entry. With\nthis patch, a failure to determine the file type will cause the entry to be\nskipped. This might be okay in some places (e.g., CheckPointSnapBuild())\nbut not in others. 
For example, in CheckPointLogicalRewriteHeap(), this\ncould cause us to skip fsync-ing a file due to a get_dirent_type() failure,\nwhich seems bad.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 16 Aug 2022 13:32:33 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Wed, Aug 17, 2022 at 2:02 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Wed, Aug 10, 2022 at 03:28:25PM +0530, Bharath Rupireddy wrote:\n> > snprintf(path, sizeof(path), \"pg_logical/mappings/%s\", mapping_de->d_name);\n> > - if (lstat(path, &statbuf) == 0 && !S_ISREG(statbuf.st_mode))\n> > + if (get_dirent_type(path, mapping_de, false, LOG) != PGFILETYPE_REG)\n> > continue;\n>\n> Previously, failure to lstat() wouldn't lead to skipping the entry. With\n> this patch, a failure to determine the file type will cause the entry to be\n> skipped. This might be okay in some places (e.g., CheckPointSnapBuild())\n> but not in others. For example, in CheckPointLogicalRewriteHeap(), this\n> could cause us to skip fsync-ing a file due to a get_dirent_type() failure,\n> which seems bad.\n\nHm. I corrected it in the v16 patch, please review.\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/", "msg_date": "Wed, 17 Aug 2022 12:39:26 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "On Wed, Aug 17, 2022 at 12:39:26PM +0530, Bharath Rupireddy wrote:\n> +\t/*\n> +\t * We're only handling directories here, skip if it's not ours. 
Also, skip\n> +\t * if the caller has already performed this check.\n> +\t */\n> +\tif (!slot_dir_checked &&\n> +\t\tlstat(path, &statbuf) == 0 && !S_ISDIR(statbuf.st_mode))\n> \t\treturn;\n\nThere was previous discussion about removing this stanza from\nReorderBufferCleanupSeralizedTXNs() completely [0]. If that is still\npossible, I think I would choose that over having callers specify whether\nto do the directory check.\n\n> +\t\t/* we're only handling directories here, skip if it's not one */\n> +\t\tsprintf(path, \"pg_replslot/%s\", logical_de->d_name);\n> +\t\tif (get_dirent_type(path, logical_de, false, LOG) != PGFILETYPE_DIR)\n> +\t\t\tcontinue;\n\nIIUC an error in get_dirent_type() could cause slots to be skipped here,\nwhich is a behavior change.\n\n[0] https://postgr.es/m/20220329224832.GA560657%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 23 Aug 2022 11:07:44 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Tue, Aug 23, 2022 at 11:37 PM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n> On Wed, Aug 17, 2022 at 12:39:26PM +0530, Bharath Rupireddy wrote:\n> > + /*\n> > + * We're only handling directories here, skip if it's not ours. Also, skip\n> > + * if the caller has already performed this check.\n> > + */\n> > + if (!slot_dir_checked &&\n> > + lstat(path, &statbuf) == 0 && !S_ISDIR(statbuf.st_mode))\n> > return;\n>\n> There was previous discussion about removing this stanza from\n> ReorderBufferCleanupSeralizedTXNs() completely [0]. 
If that is still\n> possible, I think I would choose that over having callers specify whether\n> to do the directory check.\n>\n> [0] https://postgr.es/m/20220329224832.GA560657%40nathanxps13\n\n From [0]:\n\n> My guess is that this was done because ReorderBufferCleanupSerializedTXNs()\n> is also called from ReorderBufferAllocate() and ReorderBufferFree().\n> However, it is odd that we just silently return if the slot path isn't a\n> directory in those cases. I think we could use get_dirent_type() in\n> StartupReorderBuffer() as you suggested, and then we could let ReadDir()\n> ERROR for non-directories for the other callers of\n> ReorderBufferCleanupSerializedTXNs(). WDYT?\n\nFirstly, removing lstat() completely from\nReorderBufferCleanupSerializedTXNs() is a behavioural change.\nReorderBufferCleanupSerializedTXNs() returning silently to callers\nReorderBufferAllocate() and ReorderBufferFree(), when the slot path\nisn't a directory completely makes sense to me because the callers are\ntrying to clean something and if it doesn't exist that's okay, they\ncan go ahead. Also, the usage of the ReorderBufferAllocate() and\nReorderBufferFree() is pretty wide and I don't want to risk the new\nbehaviour.\n\n> > + /* we're only handling directories here, skip if it's not one */\n> > + sprintf(path, \"pg_replslot/%s\", logical_de->d_name);\n> > + if (get_dirent_type(path, logical_de, false, LOG) != PGFILETYPE_DIR)\n> > + continue;\n>\n> IIUC an error in get_dirent_type() could cause slots to be skipped here,\n> which is a behavior change.\n\nThat behaviour hasn't changed, no? 
Currently, if lstat() fails in\nReorderBufferCleanupSerializedTXNs() it returns to\nStartupReorderBuffer()'s while loop, which in a way continues with\nthe other slots; this patch does nothing new.\n\nSo, no new patch; please find the latest v16 patch at [1].\n\n[1] https://www.postgresql.org/message-id/CALj2ACXZPMYXrPZ%2BCUE0_5DKDnxyqDGRftSgPPx--PWhWB3RtQ%40mail.gmail.com\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n", "msg_date": "Thu, 25 Aug 2022 10:48:08 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "On Thu, Aug 25, 2022 at 10:48:08AM +0530, Bharath Rupireddy wrote:\n> On Tue, Aug 23, 2022 at 11:37 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> IIUC an error in get_dirent_type() could cause slots to be skipped here,\n>> which is a behavior change.\n> \n> That behaviour hasn't changed, no? Currently, if lstat() fails in\n> ReorderBufferCleanupSerializedTXNs() it returns to\n> StartupReorderBuffer()'s while loop, which in a way continues with\n> the other slots; this patch does nothing new.\n\nAre you sure? FWIW, the changes in reorderbuffer.c for\nReorderBufferCleanupSerializedTXNs() reduce the code readability, in\nmy opinion, so that's one less argument in favor of this change.\n\nThe gain in ParseConfigDirectory() is kind of cool.\npg_tzenumerate_next(), copydir(), RemoveXlogFile()\nStartupReplicationSlots(), CheckPointLogicalRewriteHeap() and\nRemovePgTempFilesInDir() seem fine, as well. 
At least these avoid\nextra lstat() calls when the file type is unknown, which would be only\na limited number of users where some of the three DT_* are missing\n(right?).\n--\nMichael", "msg_date": "Thu, 25 Aug 2022 21:21:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Thu, Aug 25, 2022 at 5:51 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> FWIW, the changes in reorderbuffer.c for\n> ReorderBufferCleanupSerializedTXNs() reduce the code readability, in\n> my opinion, so that's one less argument in favor of this change.\n\nAgreed. I reverted the changes.\n\n> The gain in ParseConfigDirectory() is kind of cool.\n> pg_tzenumerate_next(), copydir(), RemoveXlogFile()\n> StartupReplicationSlots(), CheckPointLogicalRewriteHeap() and\n> RemovePgTempFilesInDir() seem fine, as well. At least these avoid\n> extra lstat() calls when the file type is unknown, which would be only\n> a limited number of users where some of the three DT_* are missing\n> (right?).\n\nYes, the idea is to avoid lstat() system calls when possible.\n\nPSA v17 patch with reorderbuffer.c changes reverted.\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/", "msg_date": "Sat, 27 Aug 2022 14:06:32 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "On Sat, Aug 27, 2022 at 02:06:32PM +0530, Bharath Rupireddy wrote:\n> PSA v17 patch with reorderbuffer.c changes reverted.\n\nLGTM\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 28 Aug 2022 14:52:43 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": 
"Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Sun, Aug 28, 2022 at 02:52:43PM -0700, Nathan Bossart wrote:\n> On Sat, Aug 27, 2022 at 02:06:32PM +0530, Bharath Rupireddy wrote:\n>> PSA v17 patch with reorderbuffer.c changes reverted.\n> \n> LGTM\n\nYeah, what you have here looks pretty fine to me, so this is IMO\ncommittable. Let's wait a bit to see if there are any objections from\nthe others, in case.\n--\nMichael", "msg_date": "Wed, 31 Aug 2022 16:10:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "Hi,\n\nOn 2022-08-27 14:06:32 +0530, Bharath Rupireddy wrote:\n> @@ -50,7 +51,7 @@ copydir(char *fromdir, char *todir, bool recurse)\n> \n> \twhile ((xlde = ReadDir(xldir, fromdir)) != NULL)\n> \t{\n> -\t\tstruct stat fst;\n> +\t\tPGFileType\txlde_type;\n> \n> \t\t/* If we got a cancel signal during the copy of the directory, quit */\n> \t\tCHECK_FOR_INTERRUPTS();\n> @@ -62,18 +63,15 @@ copydir(char *fromdir, char *todir, bool recurse)\n> \t\tsnprintf(fromfile, sizeof(fromfile), \"%s/%s\", fromdir, xlde->d_name);\n> \t\tsnprintf(tofile, sizeof(tofile), \"%s/%s\", todir, xlde->d_name);\n> \n> -\t\tif (lstat(fromfile, &fst) < 0)\n> -\t\t\tereport(ERROR,\n> -\t\t\t\t\t(errcode_for_file_access(),\n> -\t\t\t\t\t errmsg(\"could not stat file \\\"%s\\\": %m\", fromfile)));\n> +\t\txlde_type = get_dirent_type(fromfile, xlde, false, ERROR);\n> \n> -\t\tif (S_ISDIR(fst.st_mode))\n> +\t\tif (xlde_type == PGFILETYPE_DIR)\n> \t\t{\n> \t\t\t/* recurse to handle subdirectories */\n> \t\t\tif (recurse)\n> \t\t\t\tcopydir(fromfile, tofile, true);\n> \t\t}\n> -\t\telse if (S_ISREG(fst.st_mode))\n> +\t\telse if (xlde_type == PGFILETYPE_REG)\n> \t\t\tcopy_file(fromfile, tofile);\n> \t}\n> \tFreeDir(xldir);\n\nIt continues to make 
no sense to me to add behaviour changes around\nerror-handling as part of a conversion to get_dirent_type(). I don't at all\nunderstand why e.g. the above change to make copydir() silently skip over\nfiles it can't stat is ok?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 31 Aug 2022 12:14:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Wed, Aug 31, 2022 at 12:14:33PM -0700, Andres Freund wrote:\n> On 2022-08-27 14:06:32 +0530, Bharath Rupireddy wrote:\n>> -\t\tif (lstat(fromfile, &fst) < 0)\n>> -\t\t\tereport(ERROR,\n>> -\t\t\t\t\t(errcode_for_file_access(),\n>> -\t\t\t\t\t errmsg(\"could not stat file \\\"%s\\\": %m\", fromfile)));\n>> +\t\txlde_type = get_dirent_type(fromfile, xlde, false, ERROR);\n>> \n>> -\t\tif (S_ISDIR(fst.st_mode))\n>> +\t\tif (xlde_type == PGFILETYPE_DIR)\n>> \t\t{\n>> \t\t\t/* recurse to handle subdirectories */\n>> \t\t\tif (recurse)\n>> \t\t\t\tcopydir(fromfile, tofile, true);\n>> \t\t}\n>> -\t\telse if (S_ISREG(fst.st_mode))\n>> +\t\telse if (xlde_type == PGFILETYPE_REG)\n>> \t\t\tcopy_file(fromfile, tofile);\n>> \t}\n>> \tFreeDir(xldir);\n> \n> It continues to make no sense to me to add behaviour changes around\n> error-handling as part of a conversion to get_dirent_type(). I don't at all\n> understand why e.g. 
the above change to make copydir() silently skip over\n> files it can't stat is ok?\n\nIn this example, the call to get_dirent_type() should ERROR if the call to\nlstat() fails (the \"elevel\" argument is set to ERROR).\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 31 Aug 2022 12:24:28 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "Hi,\n\nOn 2022-08-31 12:24:28 -0700, Nathan Bossart wrote:\n> On Wed, Aug 31, 2022 at 12:14:33PM -0700, Andres Freund wrote:\n> > On 2022-08-27 14:06:32 +0530, Bharath Rupireddy wrote:\n> > It continues to make no sense to me to add behaviour changes around\n> > error-handling as part of a conversion to get_dirent_type(). I don't at all\n> > understand why e.g. the above change to make copydir() silently skip over\n> > files it can't stat is ok?\n> \n> In this example, the call to get_dirent_type() should ERROR if the call to\n> lstat() fails (the \"elevel\" argument is set to ERROR).\n\nOh, oops. Skimmed code too quickly...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 31 Aug 2022 12:28:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Wed, Aug 31, 2022 at 04:10:59PM +0900, Michael Paquier wrote:\n> Yeah, what you have here looks pretty fine to me, so this is IMO\n> committable. Let's wait a bit to see if there are any objections from\n> the others, in case.\n\nI had a few hours and I've spent them looking at what you had here in\ndetails, and there were a few things I have tweaked before applying\nthe patch. First, elevel was set to LOG for three calls of\nget_dirent_type() in code paths where we want to skip entries. 
This\nwould have become very noisy, so I've switched two of them to DEBUG1\nand the third one to DEBUG2 for consistency with the surroundings. A\nsecond thing was RemoveXlogFile(), where we passed down a dirent entry\n*and* its d_name. With this design, it would be easy to mess up\nthings and pass down a file name that does not match with its dirent\nentry, so I have finished by replacing \"segname\" by the dirent\nstructure. A last thing was about the two extra comment blocks in\nfd.c, and I could not convince myself that these were helpful hints so\nI have removed both of them.\n--\nMichael", "msg_date": "Fri, 2 Sep 2022 17:09:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Fri, Sep 02, 2022 at 05:09:14PM +0900, Michael Paquier wrote:\n> I had a few hours and I've spent them looking at what you had here in\n> details, and there were a few things I have tweaked before applying\n> the patch.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 2 Sep 2022 15:08:54 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" }, { "msg_contents": "On Sat, Sep 3, 2022 at 3:09 AM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> On Fri, Sep 02, 2022 at 05:09:14PM +0900, Michael Paquier wrote:\n> > I had a few hours and I've spent them looking at what you had here in\n> > details, and there were a few things I have tweaked before applying\n> > the patch.\n>\n> Thanks!\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n>\n>\n> Hi,\n\nYour patch requires a rebase. 
Please provide the latest rebase patch.\n\n=== applying patch\n./v17-0001-Make-more-use-of-get_dirent_type-in-place-of-sta.patch\npatching file src/backend/access/heap/rewriteheap.c\nHunk #1 FAILED at 113.\nHunk #2 FAILED at 1213.\nHunk #3 FAILED at 1221.\n3 out of 3 hunks FAILED -- saving rejects to file\nsrc/backend/access/heap/rewriteheap.c.rej\n\n\n-- \nIbrar Ahmed", "msg_date": "Sat, 3 Sep 2022 10:47:32 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical rewrite\n files to save checkpoint work" }, { "msg_contents": "On Sat, Sep 03, 2022 at 10:47:32AM +0500, Ibrar Ahmed wrote:\n> Your patch requires a rebase. Please provide the latest rebase patch.\n\nI was looking for a CF entry yesterday when I looked at this patch,\nmissing https://commitfest.postgresql.org/39/3496/. This has been\napplied as of bfb9dfd.\n--\nMichael", "msg_date": "Sat, 3 Sep 2022 17:06:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Avoid erroring out when unable to remove or parse logical\n rewrite files to save checkpoint work" } ]